00:00:00.000 Started by upstream project "autotest-nightly" build number 3923 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3298 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.039 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.040 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.059 Fetching changes from the remote Git repository 00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.097 Using shallow fetch with depth 1 00:00:00.097 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.097 > git --version # timeout=10 00:00:00.151 > git --version # 'git version 2.39.2' 00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.199 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.199 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.704 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.714 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.726 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD) 00:00:03.726 > git config core.sparsecheckout # timeout=10 00:00:03.736 > git read-tree -mu HEAD # timeout=10 00:00:03.752 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5 00:00:03.775 Commit message: "packer: Add bios builder" 00:00:03.775 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10 00:00:03.874 [Pipeline] Start of Pipeline 00:00:03.888 [Pipeline] library 00:00:03.889 Loading library shm_lib@master 00:00:03.890 Library shm_lib@master is cached. Copying from home. 00:00:03.905 [Pipeline] node 00:00:03.922 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:03.923 [Pipeline] { 00:00:03.931 [Pipeline] catchError 00:00:03.932 [Pipeline] { 00:00:03.941 [Pipeline] wrap 00:00:03.947 [Pipeline] { 00:00:03.953 [Pipeline] stage 00:00:03.954 [Pipeline] { (Prologue) 00:00:03.970 [Pipeline] echo 00:00:03.971 Node: VM-host-SM9 00:00:03.975 [Pipeline] cleanWs 00:00:04.959 [WS-CLEANUP] Deleting project workspace... 00:00:04.959 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.966 [WS-CLEANUP] done 00:00:05.128 [Pipeline] setCustomBuildProperty 00:00:05.184 [Pipeline] httpRequest 00:00:05.208 [Pipeline] echo 00:00:05.209 Sorcerer 10.211.164.101 is alive 00:00:05.216 [Pipeline] httpRequest 00:00:05.219 HttpMethod: GET 00:00:05.220 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.220 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:05.229 Response Code: HTTP/1.1 200 OK 00:00:05.229 Success: Status code 200 is in the accepted range: 200,404 00:00:05.230 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.084 [Pipeline] sh 00:00:07.363 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz 00:00:07.375 [Pipeline] httpRequest 00:00:07.398 [Pipeline] echo 00:00:07.399 Sorcerer 10.211.164.101 is alive 00:00:07.404 [Pipeline] httpRequest 00:00:07.408 HttpMethod: GET 00:00:07.408 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:07.409 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:07.424 Response Code: HTTP/1.1 200 OK 00:00:07.425 Success: Status code 200 is in the accepted range: 200,404 00:00:07.425 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:58.080 [Pipeline] sh 00:00:58.361 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:01.657 [Pipeline] sh 00:01:01.936 + git -C spdk log --oneline -n5 00:01:01.936 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:01.936 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:01.936 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:01.936 d005e023b raid: fix empty slot not updated in sb after resize 00:01:01.936 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:01.954 [Pipeline] writeFile 00:01:01.969 [Pipeline] sh 00:01:02.286 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:02.297 [Pipeline] sh 00:01:02.598 + cat autorun-spdk.conf 00:01:02.598 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.598 SPDK_TEST_NVME=1 00:01:02.598 SPDK_TEST_FTL=1 00:01:02.598 SPDK_TEST_ISAL=1 00:01:02.598 SPDK_RUN_ASAN=1 00:01:02.598 SPDK_RUN_UBSAN=1 00:01:02.598 SPDK_TEST_XNVME=1 00:01:02.598 SPDK_TEST_NVME_FDP=1 00:01:02.598 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.605 RUN_NIGHTLY=1 00:01:02.606 [Pipeline] } 00:01:02.622 [Pipeline] // stage 00:01:02.636 [Pipeline] stage 00:01:02.639 [Pipeline] { (Run VM) 00:01:02.652 [Pipeline] sh 00:01:02.932 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:02.932 + echo 'Start stage prepare_nvme.sh' 00:01:02.932 Start stage prepare_nvme.sh 00:01:02.932 + [[ -n 1 ]] 00:01:02.932 + disk_prefix=ex1 00:01:02.932 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:01:02.932 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:01:02.932 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:01:02.932 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.932 ++ SPDK_TEST_NVME=1 00:01:02.932 ++ SPDK_TEST_FTL=1 00:01:02.932 ++ SPDK_TEST_ISAL=1 00:01:02.932 ++ SPDK_RUN_ASAN=1 00:01:02.932 ++ SPDK_RUN_UBSAN=1 00:01:02.932 ++ SPDK_TEST_XNVME=1 00:01:02.932 ++ SPDK_TEST_NVME_FDP=1 00:01:02.932 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.932 ++ RUN_NIGHTLY=1 00:01:02.932 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:01:02.932 + nvme_files=() 00:01:02.932 + declare -A nvme_files 00:01:02.932 + backend_dir=/var/lib/libvirt/images/backends 00:01:02.932 + nvme_files['nvme.img']=5G 00:01:02.932 + nvme_files['nvme-cmb.img']=5G 00:01:02.932 + nvme_files['nvme-multi0.img']=4G 00:01:02.932 + nvme_files['nvme-multi1.img']=4G 00:01:02.932 + nvme_files['nvme-multi2.img']=4G 00:01:02.932 + nvme_files['nvme-openstack.img']=8G 00:01:02.932 + nvme_files['nvme-zns.img']=5G 00:01:02.932 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:02.932 + (( SPDK_TEST_FTL == 1 )) 00:01:02.932 + nvme_files["nvme-ftl.img"]=6G 00:01:02.932 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:02.932 + nvme_files["nvme-fdp.img"]=1G 00:01:02.932 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:02.932 + for nvme in "${!nvme_files[@]}" 00:01:02.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:02.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.932 + for nvme in "${!nvme_files[@]}" 00:01:02.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:02.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:02.932 + for nvme in "${!nvme_files[@]}" 00:01:02.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:02.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:02.932 + for nvme in "${!nvme_files[@]}" 00:01:02.932 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:02.932 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:03.191 + for nvme in "${!nvme_files[@]}" 00:01:03.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:03.192 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.192 + for nvme in "${!nvme_files[@]}" 00:01:03.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:03.192 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.192 + for nvme in "${!nvme_files[@]}" 00:01:03.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:03.192 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.192 + for nvme in "${!nvme_files[@]}" 00:01:03.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:03.192 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:03.192 + for nvme in "${!nvme_files[@]}" 00:01:03.192 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:03.450 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.450 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:03.450 + echo 'End stage prepare_nvme.sh' 00:01:03.450 End stage prepare_nvme.sh 00:01:03.461 [Pipeline] sh 00:01:03.739 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:03.739 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:03.739 00:01:03.739 
DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:01:03.739 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:01:03.739 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:01:03.739 HELP=0 00:01:03.739 DRY_RUN=0 00:01:03.739 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:03.739 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:03.739 NVME_AUTO_CREATE=0 00:01:03.739 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:03.739 NVME_CMB=,,,, 00:01:03.739 NVME_PMR=,,,, 00:01:03.739 NVME_ZNS=,,,, 00:01:03.739 NVME_MS=true,,,, 00:01:03.739 NVME_FDP=,,,on, 00:01:03.739 SPDK_VAGRANT_DISTRO=fedora38 00:01:03.739 SPDK_VAGRANT_VMCPU=10 00:01:03.739 SPDK_VAGRANT_VMRAM=12288 00:01:03.739 SPDK_VAGRANT_PROVIDER=libvirt 00:01:03.739 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:03.739 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:03.739 SPDK_OPENSTACK_NETWORK=0 00:01:03.739 VAGRANT_PACKAGE_BOX=0 00:01:03.739 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:01:03.739 FORCE_DISTRO=true 00:01:03.739 VAGRANT_BOX_VERSION= 00:01:03.739 EXTRA_VAGRANTFILES= 00:01:03.739 NIC_MODEL=e1000 00:01:03.739 00:01:03.740 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt' 00:01:03.740 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:01:06.271 Bringing machine 'default' up with 'libvirt' provider... 00:01:07.258 ==> default: Creating image (snapshot of base box volume). 00:01:07.258 ==> default: Creating domain with the following settings... 
00:01:07.258 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1722002786_124b7288d0794f7de080 00:01:07.258 ==> default: -- Domain type: kvm 00:01:07.258 ==> default: -- Cpus: 10 00:01:07.258 ==> default: -- Feature: acpi 00:01:07.258 ==> default: -- Feature: apic 00:01:07.258 ==> default: -- Feature: pae 00:01:07.258 ==> default: -- Memory: 12288M 00:01:07.258 ==> default: -- Memory Backing: hugepages: 00:01:07.258 ==> default: -- Management MAC: 00:01:07.258 ==> default: -- Loader: 00:01:07.258 ==> default: -- Nvram: 00:01:07.258 ==> default: -- Base box: spdk/fedora38 00:01:07.258 ==> default: -- Storage pool: default 00:01:07.258 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1722002786_124b7288d0794f7de080.img (20G) 00:01:07.258 ==> default: -- Volume Cache: default 00:01:07.258 ==> default: -- Kernel: 00:01:07.258 ==> default: -- Initrd: 00:01:07.258 ==> default: -- Graphics Type: vnc 00:01:07.258 ==> default: -- Graphics Port: -1 00:01:07.258 ==> default: -- Graphics IP: 127.0.0.1 00:01:07.258 ==> default: -- Graphics Password: Not defined 00:01:07.258 ==> default: -- Video Type: cirrus 00:01:07.258 ==> default: -- Video VRAM: 9216 00:01:07.258 ==> default: -- Sound Type: 00:01:07.258 ==> default: -- Keymap: en-us 00:01:07.258 ==> default: -- TPM Path: 00:01:07.258 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:07.258 ==> default: -- Command line args: 00:01:07.258 ==> default: -> value=-device, 00:01:07.258 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:07.258 ==> default: -> value=-drive, 00:01:07.258 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:07.258 ==> default: -> value=-device, 00:01:07.258 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:07.258 ==> default: -> value=-device, 00:01:07.258 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:07.258 ==> default: -> value=-drive, 00:01:07.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:07.259 ==> default: -> value=-drive, 00:01:07.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.259 ==> default: -> value=-drive, 00:01:07.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.259 ==> default: -> value=-drive, 00:01:07.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:07.259 ==> default: -> value=-drive, 00:01:07.259 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:07.259 ==> default: -> value=-device, 00:01:07.259 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:07.259 ==> default: Creating shared folders metadata... 00:01:07.259 ==> default: Starting domain. 00:01:08.638 ==> default: Waiting for domain to get an IP address... 00:01:23.520 ==> default: Waiting for SSH to become available... 00:01:24.897 ==> default: Configuring and enabling network interfaces... 00:01:30.168 default: SSH address: 192.168.121.81:22 00:01:30.168 default: SSH username: vagrant 00:01:30.168 default: SSH auth method: private key 00:01:31.546 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:39.664 ==> default: Mounting SSHFS shared folder... 00:01:40.598 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:40.598 ==> default: Checking Mount.. 00:01:41.972 ==> default: Folder Successfully Mounted! 00:01:41.972 ==> default: Running provisioner: file... 00:01:42.538 default: ~/.gitconfig => .gitconfig 00:01:43.104 00:01:43.104 SUCCESS! 00:01:43.104 00:01:43.104 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:43.104 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:43.104 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 00:01:43.104 00:01:43.113 [Pipeline] } 00:01:43.131 [Pipeline] // stage 00:01:43.138 [Pipeline] dir 00:01:43.138 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt 00:01:43.140 [Pipeline] { 00:01:43.151 [Pipeline] catchError 00:01:43.152 [Pipeline] { 00:01:43.163 [Pipeline] sh 00:01:43.439 + vagrant ssh-config --host vagrant 00:01:43.439 + sed -ne /^Host/,$p 00:01:43.439 + tee ssh_conf 00:01:46.734 Host vagrant 00:01:46.734 HostName 192.168.121.81 00:01:46.734 User vagrant 00:01:46.734 Port 22 00:01:46.734 UserKnownHostsFile /dev/null 00:01:46.734 StrictHostKeyChecking no 00:01:46.734 PasswordAuthentication no 00:01:46.734 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:01:46.734 IdentitiesOnly yes 00:01:46.734 LogLevel FATAL 00:01:46.734 ForwardAgent yes 00:01:46.734 ForwardX11 yes 00:01:46.734 00:01:46.747 [Pipeline] withEnv 00:01:46.749 [Pipeline] { 00:01:46.764 [Pipeline] sh 00:01:47.041 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:47.041 source /etc/os-release 00:01:47.041 [[ -e /image.version ]] && img=$(< /image.version) 00:01:47.041 # Minimal, systemd-like check. 
00:01:47.041 if [[ -e /.dockerenv ]]; then 00:01:47.041 # Clear garbage from the node's name: 00:01:47.041 # agt-er_autotest_547-896 -> autotest_547-896 00:01:47.041 # $HOSTNAME is the actual container id 00:01:47.041 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:47.041 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:47.041 # We can assume this is a mount from a host where container is running, 00:01:47.041 # so fetch its hostname to easily identify the target swarm worker. 00:01:47.041 container="$(< /etc/hostname) ($agent)" 00:01:47.041 else 00:01:47.041 # Fallback 00:01:47.041 container=$agent 00:01:47.041 fi 00:01:47.041 fi 00:01:47.041 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:47.041 00:01:47.052 [Pipeline] } 00:01:47.071 [Pipeline] // withEnv 00:01:47.079 [Pipeline] setCustomBuildProperty 00:01:47.092 [Pipeline] stage 00:01:47.094 [Pipeline] { (Tests) 00:01:47.110 [Pipeline] sh 00:01:47.389 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:47.402 [Pipeline] sh 00:01:47.681 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:47.952 [Pipeline] timeout 00:01:47.953 Timeout set to expire in 40 min 00:01:47.954 [Pipeline] { 00:01:47.968 [Pipeline] sh 00:01:48.245 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:48.812 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:01:48.825 [Pipeline] sh 00:01:49.103 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:49.375 [Pipeline] sh 00:01:49.656 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:49.931 [Pipeline] sh 00:01:50.210 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:50.469 ++ readlink -f spdk_repo 00:01:50.469 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:50.469 + [[ -n /home/vagrant/spdk_repo ]] 00:01:50.469 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:50.469 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:50.469 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:50.469 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:50.469 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:50.469 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:50.469 + cd /home/vagrant/spdk_repo 00:01:50.469 + source /etc/os-release 00:01:50.469 ++ NAME='Fedora Linux' 00:01:50.469 ++ VERSION='38 (Cloud Edition)' 00:01:50.469 ++ ID=fedora 00:01:50.469 ++ VERSION_ID=38 00:01:50.469 ++ VERSION_CODENAME= 00:01:50.469 ++ PLATFORM_ID=platform:f38 00:01:50.469 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:50.469 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:50.469 ++ LOGO=fedora-logo-icon 00:01:50.469 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:50.469 ++ HOME_URL=https://fedoraproject.org/ 00:01:50.469 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:50.469 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:50.469 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:50.469 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:50.469 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:50.469 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:50.469 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:50.469 ++ SUPPORT_END=2024-05-14 00:01:50.469 ++ VARIANT='Cloud Edition' 00:01:50.469 ++ VARIANT_ID=cloud 00:01:50.469 + uname -a 00:01:50.469 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:50.469 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:50.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:50.987 Hugepages 00:01:50.987 node hugesize free / total 00:01:50.987 node0 1048576kB 0 / 0 00:01:50.987 node0 2048kB 0 / 0 00:01:50.987 00:01:50.987 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:50.987 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:50.987 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:50.987 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:51.246 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:51.246 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:51.246 + rm -f /tmp/spdk-ld-path 00:01:51.246 + source autorun-spdk.conf 00:01:51.246 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.246 ++ SPDK_TEST_NVME=1 00:01:51.246 ++ SPDK_TEST_FTL=1 00:01:51.246 ++ SPDK_TEST_ISAL=1 00:01:51.246 ++ SPDK_RUN_ASAN=1 00:01:51.246 ++ SPDK_RUN_UBSAN=1 00:01:51.246 ++ SPDK_TEST_XNVME=1 00:01:51.246 ++ SPDK_TEST_NVME_FDP=1 00:01:51.246 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.246 ++ RUN_NIGHTLY=1 00:01:51.246 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:51.246 + [[ -n '' ]] 00:01:51.246 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:51.246 + for M in /var/spdk/build-*-manifest.txt 00:01:51.246 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:51.246 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.246 + for M in /var/spdk/build-*-manifest.txt 00:01:51.246 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:51.246 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:51.246 ++ uname 00:01:51.246 + [[ Linux == \L\i\n\u\x ]] 00:01:51.246 + sudo dmesg -T 00:01:51.246 + sudo dmesg --clear 00:01:51.246 + dmesg_pid=5204 00:01:51.246 + [[ Fedora Linux == FreeBSD ]] 00:01:51.246 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.246 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:51.246 + sudo dmesg -Tw 00:01:51.246 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:51.246 + [[ -x /usr/src/fio-static/fio ]] 00:01:51.246 + export FIO_BIN=/usr/src/fio-static/fio 00:01:51.246 + FIO_BIN=/usr/src/fio-static/fio 00:01:51.246 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:51.246 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:51.247 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:51.247 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.247 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:51.247 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:51.247 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.247 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:51.247 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:51.247 Test configuration: 00:01:51.247 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.247 SPDK_TEST_NVME=1 00:01:51.247 SPDK_TEST_FTL=1 00:01:51.247 SPDK_TEST_ISAL=1 00:01:51.247 SPDK_RUN_ASAN=1 00:01:51.247 SPDK_RUN_UBSAN=1 00:01:51.247 SPDK_TEST_XNVME=1 00:01:51.247 SPDK_TEST_NVME_FDP=1 00:01:51.247 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:51.247 RUN_NIGHTLY=1 14:07:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:51.247 14:07:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:51.247 14:07:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:51.247 14:07:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:51.247 14:07:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.247 14:07:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.247 14:07:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.247 14:07:10 -- paths/export.sh@5 -- $ export PATH 00:01:51.247 14:07:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:51.247 14:07:10 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:51.247 14:07:10 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:51.247 14:07:10 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1722002830.XXXXXX 00:01:51.247 14:07:10 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722002830.FnKyK7 00:01:51.247 14:07:10 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:51.247 14:07:10 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:51.247 14:07:10 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:51.247 14:07:10 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:51.247 14:07:10 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:51.247 14:07:10 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:51.247 14:07:10 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:51.247 14:07:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:51.506 14:07:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:51.506 14:07:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:51.506 14:07:11 -- pm/common@17 -- $ local monitor 00:01:51.506 14:07:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.506 14:07:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:51.506 14:07:11 -- pm/common@25 -- $ sleep 1 00:01:51.506 14:07:11 -- pm/common@21 -- $ date +%s 00:01:51.506 14:07:11 -- pm/common@21 -- $ date +%s 00:01:51.506 14:07:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1722002831 00:01:51.506 14:07:11 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1722002831 00:01:51.506 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1722002831_collect-vmstat.pm.log 00:01:51.506 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1722002831_collect-cpu-load.pm.log 00:01:52.444 14:07:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:52.444 14:07:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:52.444 14:07:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:52.444 14:07:12 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:52.444 14:07:12 -- spdk/autobuild.sh@16 -- $ date -u 00:01:52.444 Fri Jul 26 02:07:12 PM UTC 2024 00:01:52.444 14:07:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:52.444 v24.09-pre-321-g704257090 00:01:52.444 14:07:12 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:52.444 14:07:12 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:52.444 14:07:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:52.444 14:07:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:52.444 14:07:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.444 ************************************ 00:01:52.444 START TEST asan 00:01:52.444 ************************************ 00:01:52.444 using asan 00:01:52.444 14:07:12 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:01:52.444 00:01:52.444 
real 0m0.000s 00:01:52.444 user 0m0.000s 00:01:52.444 sys 0m0.000s 00:01:52.444 14:07:12 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:52.444 ************************************ 00:01:52.444 END TEST asan 00:01:52.445 ************************************ 00:01:52.445 14:07:12 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.445 14:07:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:52.445 14:07:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:52.445 14:07:12 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:52.445 14:07:12 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:52.445 14:07:12 -- common/autotest_common.sh@10 -- $ set +x 00:01:52.445 ************************************ 00:01:52.445 START TEST ubsan 00:01:52.445 ************************************ 00:01:52.445 using ubsan 00:01:52.445 14:07:12 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:52.445 00:01:52.445 real 0m0.000s 00:01:52.445 user 0m0.000s 00:01:52.445 sys 0m0.000s 00:01:52.445 14:07:12 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:52.445 ************************************ 00:01:52.445 END TEST ubsan 00:01:52.445 ************************************ 00:01:52.445 14:07:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:52.445 14:07:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:52.445 14:07:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:52.445 14:07:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:52.445 14:07:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:52.445 14:07:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:52.445 14:07:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:52.445 14:07:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:52.445 14:07:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:52.445 14:07:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:52.703 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:52.703 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:52.961 Using 'verbs' RDMA provider 00:02:06.532 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:21.410 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:21.410 Creating mk/config.mk...done. 00:02:21.410 Creating mk/cc.flags.mk...done. 00:02:21.410 Type 'make' to build. 
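For reference, the configure step recorded above can be reproduced on a local checkout with a sketch like the one below. The flag set is copied verbatim from the log; the clone URL, the submodule step, the fio source path and the -j10 job count are assumptions about a typical environment, not something this log itself guarantees.

# Sketch only: local reproduction of the configure + build recorded in this log.
git clone https://github.com/spdk/spdk.git && cd spdk
git submodule update --init            # DPDK, ISA-L and other deps live in submodules
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk \
    --with-xnvme --with-shared
make -j10                              # the CI VM above was provisioned with 10 vCPUs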
00:02:21.410 14:07:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:21.410 14:07:38 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:21.410 14:07:38 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:21.410 14:07:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.410 ************************************ 00:02:21.410 START TEST make 00:02:21.410 ************************************ 00:02:21.410 14:07:38 make -- common/autotest_common.sh@1125 -- $ make -j10 00:02:21.410 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:21.410 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:21.410 meson setup builddir \ 00:02:21.410 -Dwith-libaio=enabled \ 00:02:21.410 -Dwith-liburing=enabled \ 00:02:21.410 -Dwith-libvfn=disabled \ 00:02:21.410 -Dwith-spdk=false && \ 00:02:21.410 meson compile -C builddir && \ 00:02:21.410 cd -) 00:02:21.410 make[1]: Nothing to be done for 'all'. 00:02:22.783 The Meson build system 00:02:22.783 Version: 1.3.1 00:02:22.783 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:22.783 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:22.783 Build type: native build 00:02:22.783 Project name: xnvme 00:02:22.783 Project version: 0.7.3 00:02:22.783 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:22.783 C linker for the host machine: cc ld.bfd 2.39-16 00:02:22.783 Host machine cpu family: x86_64 00:02:22.783 Host machine cpu: x86_64 00:02:22.783 Message: host_machine.system: linux 00:02:22.783 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:22.783 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:22.783 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:22.783 Run-time dependency threads found: YES 00:02:22.783 Has header "setupapi.h" : NO 00:02:22.783 Has header "linux/blkzoned.h" : YES 00:02:22.783 Has header "linux/blkzoned.h" : YES (cached) 00:02:22.783 Has header "libaio.h" : YES 00:02:22.783 Library aio found: YES 00:02:22.783 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:22.783 Run-time dependency liburing found: YES 2.2 00:02:22.783 Dependency libvfn skipped: feature with-libvfn disabled 00:02:22.783 Run-time dependency appleframeworks found: NO (tried framework) 00:02:22.783 Run-time dependency appleframeworks found: NO (tried framework) 00:02:22.783 Configuring xnvme_config.h using configuration 00:02:22.783 Configuring xnvme.spec using configuration 00:02:22.783 Run-time dependency bash-completion found: YES 2.11 00:02:22.783 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:22.783 Program cp found: YES (/usr/bin/cp) 00:02:22.783 Has header "winsock2.h" : NO 00:02:22.783 Has header "dbghelp.h" : NO 00:02:22.783 Library rpcrt4 found: NO 00:02:22.783 Library rt found: YES 00:02:22.783 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:22.783 Found CMake: /usr/bin/cmake (3.27.7) 00:02:22.783 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:22.783 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:22.783 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:22.783 Build targets in project: 32 00:02:22.783 00:02:22.783 xnvme 0.7.3 00:02:22.783 00:02:22.784 User defined options 00:02:22.784 with-libaio : enabled 00:02:22.784 with-liburing: enabled 00:02:22.784 with-libvfn : disabled 00:02:22.784 with-spdk : false 00:02:22.784 00:02:22.784 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:22.784 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:23.041 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:23.041 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:23.041 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:23.041 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:23.041 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:23.041 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:23.041 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:23.041 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:23.041 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:23.041 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:23.041 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:23.041 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:23.041 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:23.300 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:23.300 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:23.300 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:23.300 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:23.300 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:23.300 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:23.300 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:23.300 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:23.300 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:23.300 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:23.300 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:23.300 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:23.300 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:23.300 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:23.300 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:23.300 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:23.300 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:23.300 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:23.300 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:23.300 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:23.300 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:23.300 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:23.300 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:23.300 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:23.300 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:23.300 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:23.300 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:23.558 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 
00:02:23.558 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:23.558 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:23.558 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:23.558 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:23.558 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:23.558 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:23.558 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:23.558 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:23.558 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:23.558 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:23.558 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:23.558 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:23.558 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:23.558 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:23.558 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:23.558 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:23.558 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:23.558 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:23.558 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:23.558 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:23.558 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:23.816 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:23.816 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:23.816 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:23.816 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:23.816 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:23.816 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:23.816 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:23.816 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:23.816 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:23.816 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:23.816 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:23.816 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:23.816 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:23.816 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:23.816 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:23.816 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:23.816 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:24.074 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:24.074 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:24.074 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:24.074 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:24.074 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:24.074 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:24.074 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:02:24.074 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:24.074 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:24.074 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:24.074 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:24.074 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:24.074 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:24.074 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:24.074 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:24.074 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:24.074 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:24.332 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:24.332 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:24.332 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:24.332 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:24.332 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:24.332 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:24.332 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:24.332 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:24.332 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:24.332 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:24.332 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:24.332 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:24.332 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:24.332 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:24.332 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:24.332 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:24.332 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:24.332 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:24.332 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:24.332 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:24.332 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:24.332 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:24.332 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:24.332 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:24.332 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:24.332 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:24.332 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:24.332 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:24.589 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:24.589 [126/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:24.589 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:24.589 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:24.589 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:24.589 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:02:24.589 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:24.589 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:24.589 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:24.589 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:24.589 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:24.589 [136/203] Linking target lib/libxnvme.so 00:02:24.589 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:24.589 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:24.589 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:24.589 [140/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:24.589 [141/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:24.589 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:24.589 [143/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:24.847 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:24.847 [145/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:24.847 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:24.847 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:24.847 [148/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:24.847 [149/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:24.847 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:24.848 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:24.848 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:24.848 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:24.848 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:24.848 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:24.848 [156/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:25.106 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:25.106 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:25.106 [159/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:25.106 [160/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:25.106 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:25.106 [162/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:25.106 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:25.106 [164/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:25.106 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:25.106 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:25.106 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:25.106 [168/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:25.364 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:25.364 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:25.364 [171/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:25.364 [172/203] Linking static target lib/libxnvme.a 00:02:25.364 [173/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:25.364 [174/203] Linking target tests/xnvme_tests_buf 00:02:25.364 [175/203] Linking target tests/xnvme_tests_cli 
00:02:25.364 [176/203] Linking target tests/xnvme_tests_ioworker 00:02:25.364 [177/203] Linking target tests/xnvme_tests_enum 00:02:25.364 [178/203] Linking target tests/xnvme_tests_lblk 00:02:25.364 [179/203] Linking target tests/xnvme_tests_async_intf 00:02:25.364 [180/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:25.364 [181/203] Linking target tests/xnvme_tests_scc 00:02:25.364 [182/203] Linking target tests/xnvme_tests_znd_append 00:02:25.364 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:25.364 [184/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:25.364 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:02:25.622 [186/203] Linking target tools/zoned 00:02:25.622 [187/203] Linking target tests/xnvme_tests_znd_state 00:02:25.622 [188/203] Linking target tests/xnvme_tests_kvs 00:02:25.622 [189/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:25.622 [190/203] Linking target tests/xnvme_tests_map 00:02:25.622 [191/203] Linking target examples/xnvme_dev 00:02:25.622 [192/203] Linking target tools/kvs 00:02:25.622 [193/203] Linking target tools/lblk 00:02:25.622 [194/203] Linking target tools/xnvme_file 00:02:25.622 [195/203] Linking target examples/xnvme_enum 00:02:25.622 [196/203] Linking target tools/xnvme 00:02:25.622 [197/203] Linking target tools/xdd 00:02:25.622 [198/203] Linking target examples/zoned_io_async 00:02:25.622 [199/203] Linking target examples/xnvme_io_async 00:02:25.622 [200/203] Linking target examples/zoned_io_sync 00:02:25.622 [201/203] Linking target examples/xnvme_single_sync 00:02:25.622 [202/203] Linking target examples/xnvme_hello 00:02:25.622 [203/203] Linking target examples/xnvme_single_async 00:02:25.622 INFO: autodetecting backend as ninja 00:02:25.622 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:25.622 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:33.728 The Meson build system 00:02:33.728 Version: 1.3.1 00:02:33.728 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:33.728 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:33.728 Build type: native build 00:02:33.728 Program cat found: YES (/usr/bin/cat) 00:02:33.728 Project name: DPDK 00:02:33.728 Project version: 24.03.0 00:02:33.728 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:33.728 C linker for the host machine: cc ld.bfd 2.39-16 00:02:33.728 Host machine cpu family: x86_64 00:02:33.728 Host machine cpu: x86_64 00:02:33.728 Message: ## Building in Developer Mode ## 00:02:33.728 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:33.728 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:33.728 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:33.728 Program python3 found: YES (/usr/bin/python3) 00:02:33.728 Program cat found: YES (/usr/bin/cat) 00:02:33.728 Compiler for C supports arguments -march=native: YES 00:02:33.728 Checking for size of "void *" : 8 00:02:33.728 Checking for size of "void *" : 8 (cached) 00:02:33.728 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:33.728 Library m found: YES 00:02:33.728 Library numa found: YES 00:02:33.728 Has header "numaif.h" : YES 00:02:33.728 Library fdt found: NO 00:02:33.728 Library execinfo found: NO 00:02:33.728 Has header "execinfo.h" : YES 00:02:33.728 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:33.728 
Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:33.728 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:33.728 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:33.728 Run-time dependency openssl found: YES 3.0.9 00:02:33.728 Run-time dependency libpcap found: YES 1.10.4 00:02:33.728 Has header "pcap.h" with dependency libpcap: YES 00:02:33.728 Compiler for C supports arguments -Wcast-qual: YES 00:02:33.728 Compiler for C supports arguments -Wdeprecated: YES 00:02:33.728 Compiler for C supports arguments -Wformat: YES 00:02:33.728 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:33.728 Compiler for C supports arguments -Wformat-security: NO 00:02:33.728 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:33.728 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:33.728 Compiler for C supports arguments -Wnested-externs: YES 00:02:33.728 Compiler for C supports arguments -Wold-style-definition: YES 00:02:33.728 Compiler for C supports arguments -Wpointer-arith: YES 00:02:33.728 Compiler for C supports arguments -Wsign-compare: YES 00:02:33.728 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:33.728 Compiler for C supports arguments -Wundef: YES 00:02:33.728 Compiler for C supports arguments -Wwrite-strings: YES 00:02:33.728 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:33.728 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:33.728 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:33.728 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:33.728 Program objdump found: YES (/usr/bin/objdump) 00:02:33.728 Compiler for C supports arguments -mavx512f: YES 00:02:33.728 Checking if "AVX512 checking" compiles: YES 00:02:33.728 Fetching value of define "__SSE4_2__" : 1 00:02:33.728 Fetching value of define "__AES__" : 1 00:02:33.728 Fetching value of define "__AVX__" : 1 00:02:33.728 Fetching value of define "__AVX2__" : 1 00:02:33.728 Fetching value of define "__AVX512BW__" : (undefined) 00:02:33.728 Fetching value of define "__AVX512CD__" : (undefined) 00:02:33.728 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:33.728 Fetching value of define "__AVX512F__" : (undefined) 00:02:33.728 Fetching value of define "__AVX512VL__" : (undefined) 00:02:33.729 Fetching value of define "__PCLMUL__" : 1 00:02:33.729 Fetching value of define "__RDRND__" : 1 00:02:33.729 Fetching value of define "__RDSEED__" : 1 00:02:33.729 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:33.729 Fetching value of define "__znver1__" : (undefined) 00:02:33.729 Fetching value of define "__znver2__" : (undefined) 00:02:33.729 Fetching value of define "__znver3__" : (undefined) 00:02:33.729 Fetching value of define "__znver4__" : (undefined) 00:02:33.729 Library asan found: YES 00:02:33.729 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:33.729 Message: lib/log: Defining dependency "log" 00:02:33.729 Message: lib/kvargs: Defining dependency "kvargs" 00:02:33.729 Message: lib/telemetry: Defining dependency "telemetry" 00:02:33.729 Library rt found: YES 00:02:33.729 Checking for function "getentropy" : NO 00:02:33.729 Message: lib/eal: Defining dependency "eal" 00:02:33.729 Message: lib/ring: Defining dependency "ring" 00:02:33.729 Message: lib/rcu: Defining dependency "rcu" 00:02:33.729 Message: lib/mempool: Defining dependency "mempool" 00:02:33.729 Message: lib/mbuf: Defining dependency "mbuf" 
00:02:33.729 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:33.729 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:33.729 Compiler for C supports arguments -mpclmul: YES 00:02:33.729 Compiler for C supports arguments -maes: YES 00:02:33.729 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:33.729 Compiler for C supports arguments -mavx512bw: YES 00:02:33.729 Compiler for C supports arguments -mavx512dq: YES 00:02:33.729 Compiler for C supports arguments -mavx512vl: YES 00:02:33.729 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:33.729 Compiler for C supports arguments -mavx2: YES 00:02:33.729 Compiler for C supports arguments -mavx: YES 00:02:33.729 Message: lib/net: Defining dependency "net" 00:02:33.729 Message: lib/meter: Defining dependency "meter" 00:02:33.729 Message: lib/ethdev: Defining dependency "ethdev" 00:02:33.729 Message: lib/pci: Defining dependency "pci" 00:02:33.729 Message: lib/cmdline: Defining dependency "cmdline" 00:02:33.729 Message: lib/hash: Defining dependency "hash" 00:02:33.729 Message: lib/timer: Defining dependency "timer" 00:02:33.729 Message: lib/compressdev: Defining dependency "compressdev" 00:02:33.729 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:33.729 Message: lib/dmadev: Defining dependency "dmadev" 00:02:33.729 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:33.729 Message: lib/power: Defining dependency "power" 00:02:33.729 Message: lib/reorder: Defining dependency "reorder" 00:02:33.729 Message: lib/security: Defining dependency "security" 00:02:33.729 Has header "linux/userfaultfd.h" : YES 00:02:33.729 Has header "linux/vduse.h" : YES 00:02:33.729 Message: lib/vhost: Defining dependency "vhost" 00:02:33.729 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:33.729 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:33.729 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:33.729 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:33.729 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:33.729 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:33.729 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:33.729 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:33.729 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:33.729 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:33.729 Program doxygen found: YES (/usr/bin/doxygen) 00:02:33.729 Configuring doxy-api-html.conf using configuration 00:02:33.729 Configuring doxy-api-man.conf using configuration 00:02:33.729 Program mandb found: YES (/usr/bin/mandb) 00:02:33.729 Program sphinx-build found: NO 00:02:33.729 Configuring rte_build_config.h using configuration 00:02:33.729 Message: 00:02:33.729 ================= 00:02:33.729 Applications Enabled 00:02:33.729 ================= 00:02:33.729 00:02:33.729 apps: 00:02:33.729 00:02:33.729 00:02:33.729 Message: 00:02:33.729 ================= 00:02:33.729 Libraries Enabled 00:02:33.729 ================= 00:02:33.729 00:02:33.729 libs: 00:02:33.729 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:33.729 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:33.729 cryptodev, dmadev, power, reorder, security, vhost, 00:02:33.729 00:02:33.729 Message: 00:02:33.729 =============== 00:02:33.729 Drivers Enabled 00:02:33.729 
=============== 00:02:33.729 00:02:33.729 common: 00:02:33.729 00:02:33.729 bus: 00:02:33.729 pci, vdev, 00:02:33.729 mempool: 00:02:33.729 ring, 00:02:33.729 dma: 00:02:33.729 00:02:33.729 net: 00:02:33.729 00:02:33.729 crypto: 00:02:33.729 00:02:33.729 compress: 00:02:33.729 00:02:33.729 vdpa: 00:02:33.729 00:02:33.729 00:02:33.729 Message: 00:02:33.729 ================= 00:02:33.729 Content Skipped 00:02:33.729 ================= 00:02:33.729 00:02:33.729 apps: 00:02:33.729 dumpcap: explicitly disabled via build config 00:02:33.729 graph: explicitly disabled via build config 00:02:33.729 pdump: explicitly disabled via build config 00:02:33.729 proc-info: explicitly disabled via build config 00:02:33.729 test-acl: explicitly disabled via build config 00:02:33.729 test-bbdev: explicitly disabled via build config 00:02:33.729 test-cmdline: explicitly disabled via build config 00:02:33.729 test-compress-perf: explicitly disabled via build config 00:02:33.729 test-crypto-perf: explicitly disabled via build config 00:02:33.729 test-dma-perf: explicitly disabled via build config 00:02:33.729 test-eventdev: explicitly disabled via build config 00:02:33.729 test-fib: explicitly disabled via build config 00:02:33.729 test-flow-perf: explicitly disabled via build config 00:02:33.729 test-gpudev: explicitly disabled via build config 00:02:33.729 test-mldev: explicitly disabled via build config 00:02:33.729 test-pipeline: explicitly disabled via build config 00:02:33.729 test-pmd: explicitly disabled via build config 00:02:33.729 test-regex: explicitly disabled via build config 00:02:33.729 test-sad: explicitly disabled via build config 00:02:33.729 test-security-perf: explicitly disabled via build config 00:02:33.729 00:02:33.729 libs: 00:02:33.729 argparse: explicitly disabled via build config 00:02:33.729 metrics: explicitly disabled via build config 00:02:33.729 acl: explicitly disabled via build config 00:02:33.729 bbdev: explicitly disabled via build config 00:02:33.729 bitratestats: explicitly disabled via build config 00:02:33.729 bpf: explicitly disabled via build config 00:02:33.729 cfgfile: explicitly disabled via build config 00:02:33.729 distributor: explicitly disabled via build config 00:02:33.729 efd: explicitly disabled via build config 00:02:33.729 eventdev: explicitly disabled via build config 00:02:33.729 dispatcher: explicitly disabled via build config 00:02:33.729 gpudev: explicitly disabled via build config 00:02:33.729 gro: explicitly disabled via build config 00:02:33.729 gso: explicitly disabled via build config 00:02:33.729 ip_frag: explicitly disabled via build config 00:02:33.729 jobstats: explicitly disabled via build config 00:02:33.729 latencystats: explicitly disabled via build config 00:02:33.729 lpm: explicitly disabled via build config 00:02:33.729 member: explicitly disabled via build config 00:02:33.729 pcapng: explicitly disabled via build config 00:02:33.729 rawdev: explicitly disabled via build config 00:02:33.729 regexdev: explicitly disabled via build config 00:02:33.729 mldev: explicitly disabled via build config 00:02:33.729 rib: explicitly disabled via build config 00:02:33.729 sched: explicitly disabled via build config 00:02:33.729 stack: explicitly disabled via build config 00:02:33.729 ipsec: explicitly disabled via build config 00:02:33.729 pdcp: explicitly disabled via build config 00:02:33.729 fib: explicitly disabled via build config 00:02:33.729 port: explicitly disabled via build config 00:02:33.729 pdump: explicitly disabled via build config 
00:02:33.729 table: explicitly disabled via build config 00:02:33.729 pipeline: explicitly disabled via build config 00:02:33.729 graph: explicitly disabled via build config 00:02:33.729 node: explicitly disabled via build config 00:02:33.729 00:02:33.729 drivers: 00:02:33.729 common/cpt: not in enabled drivers build config 00:02:33.729 common/dpaax: not in enabled drivers build config 00:02:33.729 common/iavf: not in enabled drivers build config 00:02:33.729 common/idpf: not in enabled drivers build config 00:02:33.729 common/ionic: not in enabled drivers build config 00:02:33.729 common/mvep: not in enabled drivers build config 00:02:33.729 common/octeontx: not in enabled drivers build config 00:02:33.729 bus/auxiliary: not in enabled drivers build config 00:02:33.729 bus/cdx: not in enabled drivers build config 00:02:33.729 bus/dpaa: not in enabled drivers build config 00:02:33.729 bus/fslmc: not in enabled drivers build config 00:02:33.729 bus/ifpga: not in enabled drivers build config 00:02:33.729 bus/platform: not in enabled drivers build config 00:02:33.729 bus/uacce: not in enabled drivers build config 00:02:33.729 bus/vmbus: not in enabled drivers build config 00:02:33.729 common/cnxk: not in enabled drivers build config 00:02:33.729 common/mlx5: not in enabled drivers build config 00:02:33.730 common/nfp: not in enabled drivers build config 00:02:33.730 common/nitrox: not in enabled drivers build config 00:02:33.730 common/qat: not in enabled drivers build config 00:02:33.730 common/sfc_efx: not in enabled drivers build config 00:02:33.730 mempool/bucket: not in enabled drivers build config 00:02:33.730 mempool/cnxk: not in enabled drivers build config 00:02:33.730 mempool/dpaa: not in enabled drivers build config 00:02:33.730 mempool/dpaa2: not in enabled drivers build config 00:02:33.730 mempool/octeontx: not in enabled drivers build config 00:02:33.730 mempool/stack: not in enabled drivers build config 00:02:33.730 dma/cnxk: not in enabled drivers build config 00:02:33.730 dma/dpaa: not in enabled drivers build config 00:02:33.730 dma/dpaa2: not in enabled drivers build config 00:02:33.730 dma/hisilicon: not in enabled drivers build config 00:02:33.730 dma/idxd: not in enabled drivers build config 00:02:33.730 dma/ioat: not in enabled drivers build config 00:02:33.730 dma/skeleton: not in enabled drivers build config 00:02:33.730 net/af_packet: not in enabled drivers build config 00:02:33.730 net/af_xdp: not in enabled drivers build config 00:02:33.730 net/ark: not in enabled drivers build config 00:02:33.730 net/atlantic: not in enabled drivers build config 00:02:33.730 net/avp: not in enabled drivers build config 00:02:33.730 net/axgbe: not in enabled drivers build config 00:02:33.730 net/bnx2x: not in enabled drivers build config 00:02:33.730 net/bnxt: not in enabled drivers build config 00:02:33.730 net/bonding: not in enabled drivers build config 00:02:33.730 net/cnxk: not in enabled drivers build config 00:02:33.730 net/cpfl: not in enabled drivers build config 00:02:33.730 net/cxgbe: not in enabled drivers build config 00:02:33.730 net/dpaa: not in enabled drivers build config 00:02:33.730 net/dpaa2: not in enabled drivers build config 00:02:33.730 net/e1000: not in enabled drivers build config 00:02:33.730 net/ena: not in enabled drivers build config 00:02:33.730 net/enetc: not in enabled drivers build config 00:02:33.730 net/enetfec: not in enabled drivers build config 00:02:33.730 net/enic: not in enabled drivers build config 00:02:33.730 net/failsafe: not in enabled 
drivers build config 00:02:33.730 net/fm10k: not in enabled drivers build config 00:02:33.730 net/gve: not in enabled drivers build config 00:02:33.730 net/hinic: not in enabled drivers build config 00:02:33.730 net/hns3: not in enabled drivers build config 00:02:33.730 net/i40e: not in enabled drivers build config 00:02:33.730 net/iavf: not in enabled drivers build config 00:02:33.730 net/ice: not in enabled drivers build config 00:02:33.730 net/idpf: not in enabled drivers build config 00:02:33.730 net/igc: not in enabled drivers build config 00:02:33.730 net/ionic: not in enabled drivers build config 00:02:33.730 net/ipn3ke: not in enabled drivers build config 00:02:33.730 net/ixgbe: not in enabled drivers build config 00:02:33.730 net/mana: not in enabled drivers build config 00:02:33.730 net/memif: not in enabled drivers build config 00:02:33.730 net/mlx4: not in enabled drivers build config 00:02:33.730 net/mlx5: not in enabled drivers build config 00:02:33.730 net/mvneta: not in enabled drivers build config 00:02:33.730 net/mvpp2: not in enabled drivers build config 00:02:33.730 net/netvsc: not in enabled drivers build config 00:02:33.730 net/nfb: not in enabled drivers build config 00:02:33.730 net/nfp: not in enabled drivers build config 00:02:33.730 net/ngbe: not in enabled drivers build config 00:02:33.730 net/null: not in enabled drivers build config 00:02:33.730 net/octeontx: not in enabled drivers build config 00:02:33.730 net/octeon_ep: not in enabled drivers build config 00:02:33.730 net/pcap: not in enabled drivers build config 00:02:33.730 net/pfe: not in enabled drivers build config 00:02:33.730 net/qede: not in enabled drivers build config 00:02:33.730 net/ring: not in enabled drivers build config 00:02:33.730 net/sfc: not in enabled drivers build config 00:02:33.730 net/softnic: not in enabled drivers build config 00:02:33.730 net/tap: not in enabled drivers build config 00:02:33.730 net/thunderx: not in enabled drivers build config 00:02:33.730 net/txgbe: not in enabled drivers build config 00:02:33.730 net/vdev_netvsc: not in enabled drivers build config 00:02:33.730 net/vhost: not in enabled drivers build config 00:02:33.730 net/virtio: not in enabled drivers build config 00:02:33.730 net/vmxnet3: not in enabled drivers build config 00:02:33.730 raw/*: missing internal dependency, "rawdev" 00:02:33.730 crypto/armv8: not in enabled drivers build config 00:02:33.730 crypto/bcmfs: not in enabled drivers build config 00:02:33.730 crypto/caam_jr: not in enabled drivers build config 00:02:33.730 crypto/ccp: not in enabled drivers build config 00:02:33.730 crypto/cnxk: not in enabled drivers build config 00:02:33.730 crypto/dpaa_sec: not in enabled drivers build config 00:02:33.730 crypto/dpaa2_sec: not in enabled drivers build config 00:02:33.730 crypto/ipsec_mb: not in enabled drivers build config 00:02:33.730 crypto/mlx5: not in enabled drivers build config 00:02:33.730 crypto/mvsam: not in enabled drivers build config 00:02:33.730 crypto/nitrox: not in enabled drivers build config 00:02:33.730 crypto/null: not in enabled drivers build config 00:02:33.730 crypto/octeontx: not in enabled drivers build config 00:02:33.730 crypto/openssl: not in enabled drivers build config 00:02:33.730 crypto/scheduler: not in enabled drivers build config 00:02:33.730 crypto/uadk: not in enabled drivers build config 00:02:33.730 crypto/virtio: not in enabled drivers build config 00:02:33.730 compress/isal: not in enabled drivers build config 00:02:33.730 compress/mlx5: not in enabled 
drivers build config 00:02:33.730 compress/nitrox: not in enabled drivers build config 00:02:33.730 compress/octeontx: not in enabled drivers build config 00:02:33.730 compress/zlib: not in enabled drivers build config 00:02:33.730 regex/*: missing internal dependency, "regexdev" 00:02:33.730 ml/*: missing internal dependency, "mldev" 00:02:33.730 vdpa/ifc: not in enabled drivers build config 00:02:33.730 vdpa/mlx5: not in enabled drivers build config 00:02:33.730 vdpa/nfp: not in enabled drivers build config 00:02:33.730 vdpa/sfc: not in enabled drivers build config 00:02:33.730 event/*: missing internal dependency, "eventdev" 00:02:33.730 baseband/*: missing internal dependency, "bbdev" 00:02:33.730 gpu/*: missing internal dependency, "gpudev" 00:02:33.730 00:02:33.730 00:02:33.730 Build targets in project: 85 00:02:33.730 00:02:33.730 DPDK 24.03.0 00:02:33.730 00:02:33.730 User defined options 00:02:33.730 buildtype : debug 00:02:33.730 default_library : shared 00:02:33.730 libdir : lib 00:02:33.730 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:33.730 b_sanitize : address 00:02:33.730 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:33.730 c_link_args : 00:02:33.730 cpu_instruction_set: native 00:02:33.730 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:33.730 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:33.730 enable_docs : false 00:02:33.730 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:33.730 enable_kmods : false 00:02:33.730 max_lcores : 128 00:02:33.730 tests : false 00:02:33.730 00:02:33.730 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:33.730 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:33.730 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:33.730 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:33.730 [3/268] Linking static target lib/librte_kvargs.a 00:02:33.730 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:33.730 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:33.730 [6/268] Linking static target lib/librte_log.a 00:02:34.296 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.296 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.296 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.553 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.553 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.553 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.553 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.812 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.812 [15/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.812 [16/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:34.812 [17/268] Linking target lib/librte_log.so.24.1 00:02:34.812 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.812 [19/268] Linking static target lib/librte_telemetry.a 00:02:35.070 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:35.070 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.070 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:35.328 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:35.328 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:35.328 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:35.328 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.587 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:35.587 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:35.587 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.845 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:35.845 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.845 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:35.845 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:35.845 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:35.845 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:36.103 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:36.103 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:36.363 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:36.363 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:36.363 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:36.363 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:36.363 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:36.622 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:36.622 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:36.880 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:36.880 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:36.880 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:36.880 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:36.880 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.138 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.138 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.397 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.397 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.397 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.656 [55/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:37.656 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:37.656 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:37.656 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.656 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.914 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.914 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:38.172 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:38.172 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:38.430 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:38.430 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:38.430 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:38.430 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:38.687 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:38.944 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:38.944 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.944 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:38.944 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:38.944 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.202 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.202 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.202 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:39.202 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.461 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:39.461 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:39.461 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:39.718 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:39.719 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:39.719 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:39.719 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:39.976 [85/268] Linking static target lib/librte_ring.a 00:02:39.976 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:39.976 [87/268] Linking static target lib/librte_eal.a 00:02:40.235 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.235 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.235 [90/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.235 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.493 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.493 [93/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.493 [94/268] Linking static target lib/librte_rcu.a 00:02:40.493 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.493 [96/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.493 [97/268] Linking static target lib/librte_mempool.a 00:02:40.750 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.750 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.750 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.008 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.008 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.008 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.266 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.266 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.266 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.266 [107/268] Linking static target lib/librte_mbuf.a 00:02:41.525 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.525 [109/268] Linking static target lib/librte_meter.a 00:02:41.525 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.525 [111/268] Linking static target lib/librte_net.a 00:02:41.783 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.783 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.041 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.041 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.041 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.041 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.041 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.298 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.555 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.813 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.813 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.076 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.346 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.346 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.346 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.346 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.346 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.346 [129/268] Linking static target lib/librte_pci.a 00:02:43.346 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.346 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.604 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.604 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.604 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.604 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.604 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.604 [137/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.604 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.861 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.861 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.861 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.861 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.861 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:44.119 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.119 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.119 [146/268] Linking static target lib/librte_cmdline.a 00:02:44.377 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.635 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.635 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:44.635 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.893 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.893 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.893 [153/268] Linking static target lib/librte_timer.a 00:02:45.152 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.152 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.152 [156/268] Linking static target lib/librte_ethdev.a 00:02:45.410 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.410 [158/268] Linking static target lib/librte_hash.a 00:02:45.410 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.410 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.410 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.668 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:45.668 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.668 [164/268] Linking static target lib/librte_compressdev.a 00:02:45.668 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:45.668 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.927 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:46.185 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.185 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.185 [170/268] Linking static target lib/librte_dmadev.a 00:02:46.185 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:46.443 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:46.443 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:46.443 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:46.443 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.701 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:46.959 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:46.959 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:46.959 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:47.217 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.217 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:47.217 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:47.217 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.217 [184/268] Linking static target lib/librte_cryptodev.a 00:02:47.217 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:47.475 [186/268] Linking static target lib/librte_power.a 00:02:47.733 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.733 [188/268] Linking static target lib/librte_reorder.a 00:02:47.733 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.733 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.990 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.990 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.991 [193/268] Linking static target lib/librte_security.a 00:02:47.991 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.248 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.506 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:48.763 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.763 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.763 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:49.021 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:49.021 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:49.021 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.279 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.279 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.279 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.279 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.537 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.795 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.795 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.795 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.795 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.059 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:50.059 [213/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.059 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.059 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:50.059 [216/268] 
Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:50.059 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.059 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:50.059 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:50.316 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:50.316 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:50.316 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.316 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:50.574 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.574 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:50.575 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:50.575 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.508 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.508 [229/268] Linking target lib/librte_eal.so.24.1 00:02:51.508 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:51.508 [231/268] Linking target lib/librte_meter.so.24.1 00:02:51.508 [232/268] Linking target lib/librte_pci.so.24.1 00:02:51.508 [233/268] Linking target lib/librte_timer.so.24.1 00:02:51.508 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:51.508 [235/268] Linking target lib/librte_ring.so.24.1 00:02:51.508 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:51.765 [237/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:51.765 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:51.765 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:51.765 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:51.765 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:51.765 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:51.765 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:51.765 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:51.765 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:52.023 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:52.023 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:52.023 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:52.023 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:52.281 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:52.281 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:52.281 [252/268] Linking target lib/librte_net.so.24.1 00:02:52.281 [253/268] Linking target lib/librte_reorder.so.24.1 00:02:52.281 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:52.539 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:52.539 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:52.539 
[257/268] Linking target lib/librte_hash.so.24.1 00:02:52.539 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:52.539 [259/268] Linking target lib/librte_security.so.24.1 00:02:52.539 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:52.798 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.056 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:53.056 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:53.315 [264/268] Linking target lib/librte_power.so.24.1 00:02:55.848 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:56.107 [266/268] Linking static target lib/librte_vhost.a 00:02:58.011 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.011 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:58.011 INFO: autodetecting backend as ninja 00:02:58.011 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:59.389 CC lib/ut/ut.o 00:02:59.389 CC lib/ut_mock/mock.o 00:02:59.389 CC lib/log/log.o 00:02:59.389 CC lib/log/log_flags.o 00:02:59.389 CC lib/log/log_deprecated.o 00:02:59.389 LIB libspdk_log.a 00:02:59.389 LIB libspdk_ut_mock.a 00:02:59.389 SO libspdk_ut_mock.so.6.0 00:02:59.389 SO libspdk_log.so.7.0 00:02:59.389 LIB libspdk_ut.a 00:02:59.389 SO libspdk_ut.so.2.0 00:02:59.389 SYMLINK libspdk_ut_mock.so 00:02:59.389 SYMLINK libspdk_log.so 00:02:59.389 SYMLINK libspdk_ut.so 00:02:59.648 CC lib/dma/dma.o 00:02:59.648 CC lib/ioat/ioat.o 00:02:59.648 CXX lib/trace_parser/trace.o 00:02:59.648 CC lib/util/base64.o 00:02:59.648 CC lib/util/bit_array.o 00:02:59.648 CC lib/util/cpuset.o 00:02:59.648 CC lib/util/crc16.o 00:02:59.648 CC lib/util/crc32.o 00:02:59.648 CC lib/util/crc32c.o 00:02:59.907 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.907 CC lib/util/crc32_ieee.o 00:02:59.907 CC lib/util/crc64.o 00:02:59.907 CC lib/util/dif.o 00:02:59.907 CC lib/vfio_user/host/vfio_user.o 00:02:59.907 LIB libspdk_dma.a 00:02:59.907 SO libspdk_dma.so.4.0 00:02:59.907 CC lib/util/fd.o 00:02:59.907 CC lib/util/fd_group.o 00:02:59.907 CC lib/util/file.o 00:02:59.907 CC lib/util/hexlify.o 00:02:59.907 SYMLINK libspdk_dma.so 00:03:00.166 CC lib/util/iov.o 00:03:00.166 LIB libspdk_ioat.a 00:03:00.166 SO libspdk_ioat.so.7.0 00:03:00.166 CC lib/util/math.o 00:03:00.166 LIB libspdk_vfio_user.a 00:03:00.166 CC lib/util/net.o 00:03:00.166 CC lib/util/pipe.o 00:03:00.166 CC lib/util/strerror_tls.o 00:03:00.166 SYMLINK libspdk_ioat.so 00:03:00.166 CC lib/util/string.o 00:03:00.166 SO libspdk_vfio_user.so.5.0 00:03:00.166 CC lib/util/uuid.o 00:03:00.166 SYMLINK libspdk_vfio_user.so 00:03:00.166 CC lib/util/xor.o 00:03:00.166 CC lib/util/zipf.o 00:03:00.732 LIB libspdk_util.a 00:03:00.732 SO libspdk_util.so.10.0 00:03:00.991 LIB libspdk_trace_parser.a 00:03:00.991 SO libspdk_trace_parser.so.5.0 00:03:00.991 SYMLINK libspdk_util.so 00:03:00.991 SYMLINK libspdk_trace_parser.so 00:03:01.250 CC lib/conf/conf.o 00:03:01.250 CC lib/vmd/vmd.o 00:03:01.250 CC lib/json/json_parse.o 00:03:01.250 CC lib/vmd/led.o 00:03:01.250 CC lib/json/json_util.o 00:03:01.250 CC lib/json/json_write.o 00:03:01.250 CC lib/rdma_provider/common.o 00:03:01.250 CC lib/rdma_utils/rdma_utils.o 00:03:01.250 CC lib/idxd/idxd.o 00:03:01.250 CC lib/env_dpdk/env.o 00:03:01.250 CC lib/env_dpdk/memory.o 00:03:01.508 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:03:01.508 LIB libspdk_conf.a 00:03:01.508 CC lib/env_dpdk/pci.o 00:03:01.508 CC lib/env_dpdk/init.o 00:03:01.508 SO libspdk_conf.so.6.0 00:03:01.508 LIB libspdk_rdma_utils.a 00:03:01.508 LIB libspdk_json.a 00:03:01.508 SO libspdk_rdma_utils.so.1.0 00:03:01.508 SYMLINK libspdk_conf.so 00:03:01.508 CC lib/idxd/idxd_user.o 00:03:01.508 SO libspdk_json.so.6.0 00:03:01.508 SYMLINK libspdk_rdma_utils.so 00:03:01.508 CC lib/idxd/idxd_kernel.o 00:03:01.508 LIB libspdk_rdma_provider.a 00:03:01.508 SYMLINK libspdk_json.so 00:03:01.508 SO libspdk_rdma_provider.so.6.0 00:03:01.766 SYMLINK libspdk_rdma_provider.so 00:03:01.766 CC lib/env_dpdk/threads.o 00:03:01.766 CC lib/env_dpdk/pci_ioat.o 00:03:01.766 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.766 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.766 CC lib/env_dpdk/pci_virtio.o 00:03:01.766 CC lib/env_dpdk/pci_vmd.o 00:03:01.766 CC lib/env_dpdk/pci_idxd.o 00:03:01.766 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.024 LIB libspdk_idxd.a 00:03:02.024 SO libspdk_idxd.so.12.0 00:03:02.024 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.024 LIB libspdk_vmd.a 00:03:02.024 CC lib/env_dpdk/pci_event.o 00:03:02.024 CC lib/env_dpdk/sigbus_handler.o 00:03:02.024 CC lib/env_dpdk/pci_dpdk.o 00:03:02.024 SYMLINK libspdk_idxd.so 00:03:02.024 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.024 SO libspdk_vmd.so.6.0 00:03:02.024 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.024 SYMLINK libspdk_vmd.so 00:03:02.282 LIB libspdk_jsonrpc.a 00:03:02.282 SO libspdk_jsonrpc.so.6.0 00:03:02.282 SYMLINK libspdk_jsonrpc.so 00:03:02.541 CC lib/rpc/rpc.o 00:03:02.799 LIB libspdk_rpc.a 00:03:02.799 SO libspdk_rpc.so.6.0 00:03:03.057 SYMLINK libspdk_rpc.so 00:03:03.057 LIB libspdk_env_dpdk.a 00:03:03.057 SO libspdk_env_dpdk.so.15.0 00:03:03.057 CC lib/trace/trace.o 00:03:03.057 CC lib/notify/notify.o 00:03:03.057 CC lib/trace/trace_flags.o 00:03:03.057 CC lib/notify/notify_rpc.o 00:03:03.057 CC lib/trace/trace_rpc.o 00:03:03.057 CC lib/keyring/keyring.o 00:03:03.057 CC lib/keyring/keyring_rpc.o 00:03:03.315 SYMLINK libspdk_env_dpdk.so 00:03:03.315 LIB libspdk_notify.a 00:03:03.315 SO libspdk_notify.so.6.0 00:03:03.315 SYMLINK libspdk_notify.so 00:03:03.315 LIB libspdk_keyring.a 00:03:03.573 SO libspdk_keyring.so.1.0 00:03:03.573 LIB libspdk_trace.a 00:03:03.573 SYMLINK libspdk_keyring.so 00:03:03.573 SO libspdk_trace.so.10.0 00:03:03.573 SYMLINK libspdk_trace.so 00:03:03.831 CC lib/thread/thread.o 00:03:03.831 CC lib/thread/iobuf.o 00:03:03.831 CC lib/sock/sock.o 00:03:03.831 CC lib/sock/sock_rpc.o 00:03:04.397 LIB libspdk_sock.a 00:03:04.397 SO libspdk_sock.so.10.0 00:03:04.397 SYMLINK libspdk_sock.so 00:03:04.655 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.655 CC lib/nvme/nvme_ctrlr.o 00:03:04.655 CC lib/nvme/nvme_fabric.o 00:03:04.655 CC lib/nvme/nvme_ns_cmd.o 00:03:04.655 CC lib/nvme/nvme_ns.o 00:03:04.655 CC lib/nvme/nvme_pcie_common.o 00:03:04.655 CC lib/nvme/nvme_pcie.o 00:03:04.655 CC lib/nvme/nvme.o 00:03:04.655 CC lib/nvme/nvme_qpair.o 00:03:05.590 CC lib/nvme/nvme_quirks.o 00:03:05.590 CC lib/nvme/nvme_transport.o 00:03:05.590 CC lib/nvme/nvme_discovery.o 00:03:05.848 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:05.848 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:05.848 LIB libspdk_thread.a 00:03:05.848 CC lib/nvme/nvme_tcp.o 00:03:05.848 SO libspdk_thread.so.10.1 00:03:05.848 CC lib/nvme/nvme_opal.o 00:03:06.106 SYMLINK libspdk_thread.so 00:03:06.106 CC lib/nvme/nvme_io_msg.o 00:03:06.106 CC lib/accel/accel.o 00:03:06.106 CC lib/nvme/nvme_poll_group.o 00:03:06.364 CC 
lib/nvme/nvme_zns.o 00:03:06.364 CC lib/nvme/nvme_stubs.o 00:03:06.364 CC lib/nvme/nvme_auth.o 00:03:06.621 CC lib/nvme/nvme_cuse.o 00:03:06.622 CC lib/nvme/nvme_rdma.o 00:03:06.879 CC lib/blob/blobstore.o 00:03:07.137 CC lib/accel/accel_rpc.o 00:03:07.137 CC lib/init/json_config.o 00:03:07.137 CC lib/virtio/virtio.o 00:03:07.137 CC lib/virtio/virtio_vhost_user.o 00:03:07.395 CC lib/init/subsystem.o 00:03:07.395 CC lib/accel/accel_sw.o 00:03:07.653 CC lib/virtio/virtio_vfio_user.o 00:03:07.653 CC lib/init/subsystem_rpc.o 00:03:07.653 CC lib/virtio/virtio_pci.o 00:03:07.653 CC lib/blob/request.o 00:03:07.653 CC lib/blob/zeroes.o 00:03:07.653 CC lib/blob/blob_bs_dev.o 00:03:07.653 CC lib/init/rpc.o 00:03:07.911 LIB libspdk_accel.a 00:03:07.911 LIB libspdk_init.a 00:03:07.911 SO libspdk_accel.so.16.0 00:03:07.911 LIB libspdk_virtio.a 00:03:07.911 SO libspdk_init.so.5.0 00:03:07.911 SYMLINK libspdk_accel.so 00:03:07.911 SO libspdk_virtio.so.7.0 00:03:07.911 SYMLINK libspdk_init.so 00:03:08.169 SYMLINK libspdk_virtio.so 00:03:08.169 CC lib/bdev/bdev_rpc.o 00:03:08.169 CC lib/bdev/bdev.o 00:03:08.169 CC lib/bdev/bdev_zone.o 00:03:08.169 CC lib/bdev/part.o 00:03:08.169 CC lib/bdev/scsi_nvme.o 00:03:08.169 CC lib/event/app.o 00:03:08.169 CC lib/event/log_rpc.o 00:03:08.169 CC lib/event/reactor.o 00:03:08.428 LIB libspdk_nvme.a 00:03:08.428 CC lib/event/app_rpc.o 00:03:08.428 CC lib/event/scheduler_static.o 00:03:08.428 SO libspdk_nvme.so.13.1 00:03:08.995 LIB libspdk_event.a 00:03:08.995 SO libspdk_event.so.14.0 00:03:08.995 SYMLINK libspdk_nvme.so 00:03:08.995 SYMLINK libspdk_event.so 00:03:11.527 LIB libspdk_blob.a 00:03:11.527 SO libspdk_blob.so.11.0 00:03:11.527 SYMLINK libspdk_blob.so 00:03:11.785 LIB libspdk_bdev.a 00:03:11.785 CC lib/blobfs/blobfs.o 00:03:11.785 CC lib/blobfs/tree.o 00:03:11.785 CC lib/lvol/lvol.o 00:03:11.785 SO libspdk_bdev.so.16.0 00:03:12.043 SYMLINK libspdk_bdev.so 00:03:12.043 CC lib/scsi/dev.o 00:03:12.043 CC lib/scsi/lun.o 00:03:12.302 CC lib/scsi/port.o 00:03:12.302 CC lib/scsi/scsi.o 00:03:12.302 CC lib/nvmf/ctrlr.o 00:03:12.302 CC lib/ftl/ftl_core.o 00:03:12.302 CC lib/ublk/ublk.o 00:03:12.302 CC lib/nbd/nbd.o 00:03:12.302 CC lib/scsi/scsi_bdev.o 00:03:12.302 CC lib/scsi/scsi_pr.o 00:03:12.560 CC lib/scsi/scsi_rpc.o 00:03:12.560 CC lib/scsi/task.o 00:03:12.560 CC lib/ftl/ftl_init.o 00:03:12.560 CC lib/ftl/ftl_layout.o 00:03:12.818 CC lib/nbd/nbd_rpc.o 00:03:12.818 CC lib/ftl/ftl_debug.o 00:03:12.818 CC lib/ftl/ftl_io.o 00:03:12.818 LIB libspdk_blobfs.a 00:03:12.818 CC lib/ftl/ftl_sb.o 00:03:12.818 SO libspdk_blobfs.so.10.0 00:03:13.124 LIB libspdk_scsi.a 00:03:13.124 LIB libspdk_lvol.a 00:03:13.124 LIB libspdk_nbd.a 00:03:13.124 SO libspdk_lvol.so.10.0 00:03:13.124 CC lib/ublk/ublk_rpc.o 00:03:13.124 SYMLINK libspdk_blobfs.so 00:03:13.124 CC lib/nvmf/ctrlr_discovery.o 00:03:13.124 SO libspdk_scsi.so.9.0 00:03:13.124 SO libspdk_nbd.so.7.0 00:03:13.124 CC lib/nvmf/ctrlr_bdev.o 00:03:13.124 SYMLINK libspdk_lvol.so 00:03:13.124 CC lib/nvmf/subsystem.o 00:03:13.124 CC lib/nvmf/nvmf.o 00:03:13.124 CC lib/ftl/ftl_l2p.o 00:03:13.124 SYMLINK libspdk_nbd.so 00:03:13.124 CC lib/nvmf/nvmf_rpc.o 00:03:13.124 CC lib/ftl/ftl_l2p_flat.o 00:03:13.124 SYMLINK libspdk_scsi.so 00:03:13.124 CC lib/ftl/ftl_nv_cache.o 00:03:13.124 LIB libspdk_ublk.a 00:03:13.124 SO libspdk_ublk.so.3.0 00:03:13.382 SYMLINK libspdk_ublk.so 00:03:13.382 CC lib/nvmf/transport.o 00:03:13.382 CC lib/ftl/ftl_band.o 00:03:13.382 CC lib/iscsi/conn.o 00:03:13.639 CC lib/iscsi/init_grp.o 00:03:13.895 CC 
lib/iscsi/iscsi.o 00:03:13.895 CC lib/nvmf/tcp.o 00:03:13.895 CC lib/nvmf/stubs.o 00:03:14.153 CC lib/nvmf/mdns_server.o 00:03:14.153 CC lib/ftl/ftl_band_ops.o 00:03:14.153 CC lib/iscsi/md5.o 00:03:14.410 CC lib/vhost/vhost.o 00:03:14.410 CC lib/iscsi/param.o 00:03:14.410 CC lib/nvmf/rdma.o 00:03:14.410 CC lib/ftl/ftl_writer.o 00:03:14.410 CC lib/ftl/ftl_rq.o 00:03:14.667 CC lib/vhost/vhost_rpc.o 00:03:14.667 CC lib/vhost/vhost_scsi.o 00:03:14.667 CC lib/vhost/vhost_blk.o 00:03:14.667 CC lib/vhost/rte_vhost_user.o 00:03:14.667 CC lib/ftl/ftl_reloc.o 00:03:14.926 CC lib/iscsi/portal_grp.o 00:03:15.184 CC lib/iscsi/tgt_node.o 00:03:15.184 CC lib/ftl/ftl_l2p_cache.o 00:03:15.184 CC lib/ftl/ftl_p2l.o 00:03:15.184 CC lib/nvmf/auth.o 00:03:15.752 CC lib/iscsi/iscsi_subsystem.o 00:03:15.752 CC lib/iscsi/iscsi_rpc.o 00:03:15.752 CC lib/iscsi/task.o 00:03:15.752 CC lib/ftl/mngt/ftl_mngt.o 00:03:15.752 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:15.752 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:16.011 LIB libspdk_vhost.a 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:16.011 SO libspdk_vhost.so.8.0 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:16.011 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:16.011 LIB libspdk_iscsi.a 00:03:16.270 SYMLINK libspdk_vhost.so 00:03:16.270 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:16.270 SO libspdk_iscsi.so.8.0 00:03:16.270 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:16.270 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:16.270 CC lib/ftl/utils/ftl_conf.o 00:03:16.270 CC lib/ftl/utils/ftl_md.o 00:03:16.270 CC lib/ftl/utils/ftl_mempool.o 00:03:16.270 CC lib/ftl/utils/ftl_bitmap.o 00:03:16.529 CC lib/ftl/utils/ftl_property.o 00:03:16.529 SYMLINK libspdk_iscsi.so 00:03:16.529 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:16.529 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:16.529 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:16.529 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:16.529 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:16.529 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:16.787 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:16.787 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:16.787 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:16.787 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:16.787 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:16.787 CC lib/ftl/base/ftl_base_dev.o 00:03:16.787 CC lib/ftl/base/ftl_base_bdev.o 00:03:16.787 CC lib/ftl/ftl_trace.o 00:03:17.045 LIB libspdk_ftl.a 00:03:17.305 LIB libspdk_nvmf.a 00:03:17.305 SO libspdk_ftl.so.9.0 00:03:17.564 SO libspdk_nvmf.so.19.0 00:03:17.823 SYMLINK libspdk_nvmf.so 00:03:17.823 SYMLINK libspdk_ftl.so 00:03:18.081 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.340 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.340 CC module/sock/posix/posix.o 00:03:18.340 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:18.340 CC module/blob/bdev/blob_bdev.o 00:03:18.340 CC module/keyring/linux/keyring.o 00:03:18.340 CC module/keyring/file/keyring.o 00:03:18.340 CC module/accel/error/accel_error.o 00:03:18.340 CC module/scheduler/gscheduler/gscheduler.o 00:03:18.340 CC module/accel/ioat/accel_ioat.o 00:03:18.340 LIB libspdk_env_dpdk_rpc.a 00:03:18.340 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.340 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.340 CC module/keyring/file/keyring_rpc.o 00:03:18.340 CC module/keyring/linux/keyring_rpc.o 00:03:18.340 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.340 LIB 
libspdk_scheduler_gscheduler.a 00:03:18.340 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:18.599 CC module/accel/error/accel_error_rpc.o 00:03:18.599 SO libspdk_scheduler_gscheduler.so.4.0 00:03:18.599 LIB libspdk_scheduler_dynamic.a 00:03:18.599 SO libspdk_scheduler_dynamic.so.4.0 00:03:18.599 CC module/accel/ioat/accel_ioat_rpc.o 00:03:18.599 SYMLINK libspdk_scheduler_gscheduler.so 00:03:18.599 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:18.599 LIB libspdk_keyring_file.a 00:03:18.599 LIB libspdk_keyring_linux.a 00:03:18.599 SYMLINK libspdk_scheduler_dynamic.so 00:03:18.599 LIB libspdk_blob_bdev.a 00:03:18.599 SO libspdk_keyring_file.so.1.0 00:03:18.599 CC module/accel/dsa/accel_dsa.o 00:03:18.599 CC module/accel/dsa/accel_dsa_rpc.o 00:03:18.599 SO libspdk_keyring_linux.so.1.0 00:03:18.599 SO libspdk_blob_bdev.so.11.0 00:03:18.599 LIB libspdk_accel_error.a 00:03:18.599 SYMLINK libspdk_keyring_file.so 00:03:18.599 SYMLINK libspdk_keyring_linux.so 00:03:18.599 LIB libspdk_accel_ioat.a 00:03:18.599 SYMLINK libspdk_blob_bdev.so 00:03:18.599 SO libspdk_accel_error.so.2.0 00:03:18.857 SO libspdk_accel_ioat.so.6.0 00:03:18.857 CC module/accel/iaa/accel_iaa.o 00:03:18.857 CC module/accel/iaa/accel_iaa_rpc.o 00:03:18.858 SYMLINK libspdk_accel_error.so 00:03:18.858 SYMLINK libspdk_accel_ioat.so 00:03:18.858 LIB libspdk_accel_dsa.a 00:03:18.858 SO libspdk_accel_dsa.so.5.0 00:03:19.115 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.115 CC module/bdev/error/vbdev_error.o 00:03:19.115 CC module/bdev/gpt/gpt.o 00:03:19.115 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.115 CC module/bdev/delay/vbdev_delay.o 00:03:19.115 CC module/bdev/malloc/bdev_malloc.o 00:03:19.115 SYMLINK libspdk_accel_dsa.so 00:03:19.115 CC module/bdev/error/vbdev_error_rpc.o 00:03:19.115 LIB libspdk_accel_iaa.a 00:03:19.115 CC module/bdev/null/bdev_null.o 00:03:19.115 SO libspdk_accel_iaa.so.3.0 00:03:19.115 SYMLINK libspdk_accel_iaa.so 00:03:19.115 CC module/bdev/null/bdev_null_rpc.o 00:03:19.115 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.115 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:19.374 CC module/bdev/gpt/vbdev_gpt.o 00:03:19.374 LIB libspdk_sock_posix.a 00:03:19.374 SO libspdk_sock_posix.so.6.0 00:03:19.374 LIB libspdk_bdev_error.a 00:03:19.374 SO libspdk_bdev_error.so.6.0 00:03:19.374 LIB libspdk_bdev_null.a 00:03:19.374 LIB libspdk_blobfs_bdev.a 00:03:19.374 SYMLINK libspdk_sock_posix.so 00:03:19.374 SO libspdk_bdev_null.so.6.0 00:03:19.374 SYMLINK libspdk_bdev_error.so 00:03:19.374 SO libspdk_blobfs_bdev.so.6.0 00:03:19.632 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:19.632 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:19.632 SYMLINK libspdk_bdev_null.so 00:03:19.632 SYMLINK libspdk_blobfs_bdev.so 00:03:19.632 CC module/bdev/nvme/bdev_nvme.o 00:03:19.632 LIB libspdk_bdev_gpt.a 00:03:19.632 CC module/bdev/passthru/vbdev_passthru.o 00:03:19.632 SO libspdk_bdev_gpt.so.6.0 00:03:19.632 CC module/bdev/raid/bdev_raid.o 00:03:19.632 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.632 LIB libspdk_bdev_lvol.a 00:03:19.632 SYMLINK libspdk_bdev_gpt.so 00:03:19.632 CC module/bdev/split/vbdev_split.o 00:03:19.632 LIB libspdk_bdev_delay.a 00:03:19.632 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:19.632 LIB libspdk_bdev_malloc.a 00:03:19.632 SO libspdk_bdev_lvol.so.6.0 00:03:19.632 SO libspdk_bdev_delay.so.6.0 00:03:19.891 SO libspdk_bdev_malloc.so.6.0 00:03:19.891 SYMLINK libspdk_bdev_lvol.so 00:03:19.891 SYMLINK libspdk_bdev_delay.so 00:03:19.891 CC module/bdev/raid/bdev_raid_rpc.o 00:03:19.891 CC 
module/bdev/raid/bdev_raid_sb.o 00:03:19.891 SYMLINK libspdk_bdev_malloc.so 00:03:19.891 CC module/bdev/split/vbdev_split_rpc.o 00:03:19.891 CC module/bdev/xnvme/bdev_xnvme.o 00:03:19.891 LIB libspdk_bdev_passthru.a 00:03:19.891 CC module/bdev/raid/raid0.o 00:03:19.891 CC module/bdev/aio/bdev_aio.o 00:03:19.891 SO libspdk_bdev_passthru.so.6.0 00:03:20.151 LIB libspdk_bdev_split.a 00:03:20.151 SO libspdk_bdev_split.so.6.0 00:03:20.151 SYMLINK libspdk_bdev_passthru.so 00:03:20.151 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:20.151 SYMLINK libspdk_bdev_split.so 00:03:20.151 CC module/bdev/raid/raid1.o 00:03:20.151 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:20.151 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.410 CC module/bdev/ftl/bdev_ftl.o 00:03:20.410 CC module/bdev/aio/bdev_aio_rpc.o 00:03:20.410 LIB libspdk_bdev_zone_block.a 00:03:20.410 CC module/bdev/iscsi/bdev_iscsi.o 00:03:20.410 SO libspdk_bdev_zone_block.so.6.0 00:03:20.410 LIB libspdk_bdev_xnvme.a 00:03:20.410 CC module/bdev/raid/concat.o 00:03:20.410 SO libspdk_bdev_xnvme.so.3.0 00:03:20.410 SYMLINK libspdk_bdev_zone_block.so 00:03:20.410 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:20.410 LIB libspdk_bdev_aio.a 00:03:20.410 SYMLINK libspdk_bdev_xnvme.so 00:03:20.410 CC module/bdev/nvme/nvme_rpc.o 00:03:20.410 SO libspdk_bdev_aio.so.6.0 00:03:20.669 SYMLINK libspdk_bdev_aio.so 00:03:20.669 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:20.669 CC module/bdev/nvme/bdev_mdns_client.o 00:03:20.669 CC module/bdev/nvme/vbdev_opal.o 00:03:20.669 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:20.669 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:20.669 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:20.669 LIB libspdk_bdev_iscsi.a 00:03:20.669 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:20.928 SO libspdk_bdev_iscsi.so.6.0 00:03:20.928 LIB libspdk_bdev_ftl.a 00:03:20.928 SO libspdk_bdev_ftl.so.6.0 00:03:20.928 SYMLINK libspdk_bdev_iscsi.so 00:03:20.928 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:20.928 SYMLINK libspdk_bdev_ftl.so 00:03:20.928 LIB libspdk_bdev_raid.a 00:03:21.187 SO libspdk_bdev_raid.so.6.0 00:03:21.187 SYMLINK libspdk_bdev_raid.so 00:03:21.445 LIB libspdk_bdev_virtio.a 00:03:21.445 SO libspdk_bdev_virtio.so.6.0 00:03:21.445 SYMLINK libspdk_bdev_virtio.so 00:03:22.821 LIB libspdk_bdev_nvme.a 00:03:22.821 SO libspdk_bdev_nvme.so.7.0 00:03:22.821 SYMLINK libspdk_bdev_nvme.so 00:03:23.389 CC module/event/subsystems/scheduler/scheduler.o 00:03:23.389 CC module/event/subsystems/sock/sock.o 00:03:23.389 CC module/event/subsystems/vmd/vmd.o 00:03:23.389 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:23.389 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.389 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.389 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:23.389 CC module/event/subsystems/keyring/keyring.o 00:03:23.389 LIB libspdk_event_scheduler.a 00:03:23.389 LIB libspdk_event_keyring.a 00:03:23.389 LIB libspdk_event_sock.a 00:03:23.652 LIB libspdk_event_iobuf.a 00:03:23.652 SO libspdk_event_scheduler.so.4.0 00:03:23.652 SO libspdk_event_keyring.so.1.0 00:03:23.652 SO libspdk_event_sock.so.5.0 00:03:23.652 LIB libspdk_event_vmd.a 00:03:23.652 LIB libspdk_event_vhost_blk.a 00:03:23.652 SO libspdk_event_iobuf.so.3.0 00:03:23.652 SO libspdk_event_vmd.so.6.0 00:03:23.652 SYMLINK libspdk_event_scheduler.so 00:03:23.652 SO libspdk_event_vhost_blk.so.3.0 00:03:23.652 SYMLINK libspdk_event_keyring.so 00:03:23.652 SYMLINK libspdk_event_sock.so 00:03:23.652 SYMLINK libspdk_event_iobuf.so 00:03:23.652 SYMLINK 
libspdk_event_vhost_blk.so 00:03:23.652 SYMLINK libspdk_event_vmd.so 00:03:23.912 CC module/event/subsystems/accel/accel.o 00:03:24.170 LIB libspdk_event_accel.a 00:03:24.170 SO libspdk_event_accel.so.6.0 00:03:24.170 SYMLINK libspdk_event_accel.so 00:03:24.429 CC module/event/subsystems/bdev/bdev.o 00:03:24.687 LIB libspdk_event_bdev.a 00:03:24.687 SO libspdk_event_bdev.so.6.0 00:03:24.687 SYMLINK libspdk_event_bdev.so 00:03:24.946 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.946 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:24.946 CC module/event/subsystems/nbd/nbd.o 00:03:24.946 CC module/event/subsystems/ublk/ublk.o 00:03:24.946 CC module/event/subsystems/scsi/scsi.o 00:03:25.204 LIB libspdk_event_nbd.a 00:03:25.204 LIB libspdk_event_ublk.a 00:03:25.204 LIB libspdk_event_scsi.a 00:03:25.204 SO libspdk_event_nbd.so.6.0 00:03:25.204 SO libspdk_event_ublk.so.3.0 00:03:25.204 SO libspdk_event_scsi.so.6.0 00:03:25.204 SYMLINK libspdk_event_nbd.so 00:03:25.204 SYMLINK libspdk_event_ublk.so 00:03:25.204 LIB libspdk_event_nvmf.a 00:03:25.204 SYMLINK libspdk_event_scsi.so 00:03:25.463 SO libspdk_event_nvmf.so.6.0 00:03:25.463 SYMLINK libspdk_event_nvmf.so 00:03:25.463 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:25.463 CC module/event/subsystems/iscsi/iscsi.o 00:03:25.721 LIB libspdk_event_vhost_scsi.a 00:03:25.721 LIB libspdk_event_iscsi.a 00:03:25.721 SO libspdk_event_iscsi.so.6.0 00:03:25.721 SO libspdk_event_vhost_scsi.so.3.0 00:03:25.721 SYMLINK libspdk_event_iscsi.so 00:03:25.721 SYMLINK libspdk_event_vhost_scsi.so 00:03:25.979 SO libspdk.so.6.0 00:03:25.980 SYMLINK libspdk.so 00:03:26.238 TEST_HEADER include/spdk/accel.h 00:03:26.239 TEST_HEADER include/spdk/accel_module.h 00:03:26.239 TEST_HEADER include/spdk/assert.h 00:03:26.239 CXX app/trace/trace.o 00:03:26.239 TEST_HEADER include/spdk/barrier.h 00:03:26.239 TEST_HEADER include/spdk/base64.h 00:03:26.239 CC test/rpc_client/rpc_client_test.o 00:03:26.239 TEST_HEADER include/spdk/bdev.h 00:03:26.239 TEST_HEADER include/spdk/bdev_module.h 00:03:26.239 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.239 TEST_HEADER include/spdk/bit_array.h 00:03:26.239 TEST_HEADER include/spdk/bit_pool.h 00:03:26.239 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.239 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.239 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:26.239 TEST_HEADER include/spdk/blobfs.h 00:03:26.239 TEST_HEADER include/spdk/blob.h 00:03:26.239 TEST_HEADER include/spdk/conf.h 00:03:26.239 TEST_HEADER include/spdk/config.h 00:03:26.239 TEST_HEADER include/spdk/cpuset.h 00:03:26.239 TEST_HEADER include/spdk/crc16.h 00:03:26.239 TEST_HEADER include/spdk/crc32.h 00:03:26.239 TEST_HEADER include/spdk/crc64.h 00:03:26.239 TEST_HEADER include/spdk/dif.h 00:03:26.239 TEST_HEADER include/spdk/dma.h 00:03:26.239 TEST_HEADER include/spdk/endian.h 00:03:26.239 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.239 TEST_HEADER include/spdk/env.h 00:03:26.239 TEST_HEADER include/spdk/event.h 00:03:26.239 TEST_HEADER include/spdk/fd_group.h 00:03:26.239 TEST_HEADER include/spdk/fd.h 00:03:26.239 TEST_HEADER include/spdk/file.h 00:03:26.239 CC examples/ioat/perf/perf.o 00:03:26.239 TEST_HEADER include/spdk/ftl.h 00:03:26.239 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.239 TEST_HEADER include/spdk/hexlify.h 00:03:26.239 CC test/thread/poller_perf/poller_perf.o 00:03:26.239 CC examples/util/zipf/zipf.o 00:03:26.239 TEST_HEADER include/spdk/histogram_data.h 00:03:26.239 TEST_HEADER include/spdk/idxd.h 00:03:26.239 TEST_HEADER 
include/spdk/idxd_spec.h 00:03:26.239 TEST_HEADER include/spdk/init.h 00:03:26.239 TEST_HEADER include/spdk/ioat.h 00:03:26.239 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.239 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.498 TEST_HEADER include/spdk/json.h 00:03:26.498 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.498 TEST_HEADER include/spdk/keyring.h 00:03:26.498 TEST_HEADER include/spdk/keyring_module.h 00:03:26.498 TEST_HEADER include/spdk/likely.h 00:03:26.498 TEST_HEADER include/spdk/log.h 00:03:26.498 TEST_HEADER include/spdk/lvol.h 00:03:26.498 TEST_HEADER include/spdk/memory.h 00:03:26.498 TEST_HEADER include/spdk/mmio.h 00:03:26.498 TEST_HEADER include/spdk/nbd.h 00:03:26.498 TEST_HEADER include/spdk/net.h 00:03:26.498 TEST_HEADER include/spdk/notify.h 00:03:26.498 TEST_HEADER include/spdk/nvme.h 00:03:26.498 CC test/dma/test_dma/test_dma.o 00:03:26.498 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.498 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.498 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.498 CC test/app/bdev_svc/bdev_svc.o 00:03:26.498 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.498 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.498 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.498 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:26.498 TEST_HEADER include/spdk/nvmf.h 00:03:26.498 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.498 TEST_HEADER include/spdk/nvmf_spec.h 00:03:26.498 TEST_HEADER include/spdk/nvmf_transport.h 00:03:26.498 TEST_HEADER include/spdk/opal.h 00:03:26.498 TEST_HEADER include/spdk/opal_spec.h 00:03:26.498 TEST_HEADER include/spdk/pci_ids.h 00:03:26.498 TEST_HEADER include/spdk/pipe.h 00:03:26.498 TEST_HEADER include/spdk/queue.h 00:03:26.498 TEST_HEADER include/spdk/reduce.h 00:03:26.498 TEST_HEADER include/spdk/rpc.h 00:03:26.498 TEST_HEADER include/spdk/scheduler.h 00:03:26.498 TEST_HEADER include/spdk/scsi.h 00:03:26.498 TEST_HEADER include/spdk/scsi_spec.h 00:03:26.498 TEST_HEADER include/spdk/sock.h 00:03:26.498 TEST_HEADER include/spdk/stdinc.h 00:03:26.498 TEST_HEADER include/spdk/string.h 00:03:26.498 TEST_HEADER include/spdk/thread.h 00:03:26.498 TEST_HEADER include/spdk/trace.h 00:03:26.498 TEST_HEADER include/spdk/trace_parser.h 00:03:26.498 TEST_HEADER include/spdk/tree.h 00:03:26.498 TEST_HEADER include/spdk/ublk.h 00:03:26.498 TEST_HEADER include/spdk/util.h 00:03:26.498 TEST_HEADER include/spdk/uuid.h 00:03:26.498 TEST_HEADER include/spdk/version.h 00:03:26.498 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:26.498 LINK rpc_client_test 00:03:26.498 LINK poller_perf 00:03:26.498 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:26.498 TEST_HEADER include/spdk/vhost.h 00:03:26.498 TEST_HEADER include/spdk/vmd.h 00:03:26.498 TEST_HEADER include/spdk/xor.h 00:03:26.498 TEST_HEADER include/spdk/zipf.h 00:03:26.498 CXX test/cpp_headers/accel.o 00:03:26.498 LINK interrupt_tgt 00:03:26.498 LINK zipf 00:03:26.757 LINK ioat_perf 00:03:26.757 LINK bdev_svc 00:03:26.757 LINK spdk_trace 00:03:26.757 CXX test/cpp_headers/accel_module.o 00:03:26.757 CC examples/ioat/verify/verify.o 00:03:26.757 CC test/env/vtophys/vtophys.o 00:03:26.757 LINK test_dma 00:03:26.757 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.016 CC test/app/histogram_perf/histogram_perf.o 00:03:27.016 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.016 CXX test/cpp_headers/assert.o 00:03:27.016 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:27.016 LINK vtophys 00:03:27.016 CC app/trace_record/trace_record.o 00:03:27.016 LINK env_dpdk_post_init 00:03:27.016 LINK 
verify 00:03:27.016 LINK histogram_perf 00:03:27.016 CXX test/cpp_headers/barrier.o 00:03:27.016 LINK mem_callbacks 00:03:27.275 CC test/event/event_perf/event_perf.o 00:03:27.275 CXX test/cpp_headers/base64.o 00:03:27.275 LINK spdk_trace_record 00:03:27.275 CC test/env/memory/memory_ut.o 00:03:27.275 CC test/nvme/aer/aer.o 00:03:27.534 LINK nvme_fuzz 00:03:27.534 CC examples/thread/thread/thread_ex.o 00:03:27.534 CC test/blobfs/mkfs/mkfs.o 00:03:27.534 CC test/accel/dif/dif.o 00:03:27.534 LINK event_perf 00:03:27.534 CXX test/cpp_headers/bdev.o 00:03:27.534 CC app/nvmf_tgt/nvmf_main.o 00:03:27.793 LINK mkfs 00:03:27.793 CXX test/cpp_headers/bdev_module.o 00:03:27.793 CC test/event/reactor/reactor.o 00:03:27.793 LINK aer 00:03:27.793 LINK thread 00:03:27.793 LINK nvmf_tgt 00:03:27.793 LINK reactor 00:03:27.793 CC test/lvol/esnap/esnap.o 00:03:27.793 CXX test/cpp_headers/bdev_zone.o 00:03:28.053 CC test/nvme/reset/reset.o 00:03:28.053 CC test/nvme/sgl/sgl.o 00:03:28.053 LINK dif 00:03:28.053 CXX test/cpp_headers/bit_array.o 00:03:28.053 CC test/event/reactor_perf/reactor_perf.o 00:03:28.053 CC examples/sock/hello_world/hello_sock.o 00:03:28.053 CC app/iscsi_tgt/iscsi_tgt.o 00:03:28.311 LINK reactor_perf 00:03:28.311 CXX test/cpp_headers/bit_pool.o 00:03:28.311 LINK reset 00:03:28.311 LINK sgl 00:03:28.311 CC test/env/pci/pci_ut.o 00:03:28.311 LINK iscsi_tgt 00:03:28.311 LINK hello_sock 00:03:28.568 CXX test/cpp_headers/blob_bdev.o 00:03:28.568 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.568 CC test/event/app_repeat/app_repeat.o 00:03:28.568 CC test/nvme/e2edp/nvme_dp.o 00:03:28.568 LINK memory_ut 00:03:28.568 CXX test/cpp_headers/blobfs.o 00:03:28.568 LINK app_repeat 00:03:28.826 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.826 CC examples/vmd/lsvmd/lsvmd.o 00:03:28.826 CC app/spdk_tgt/spdk_tgt.o 00:03:28.826 LINK pci_ut 00:03:28.826 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.826 CXX test/cpp_headers/blob.o 00:03:28.826 LINK nvme_dp 00:03:28.826 LINK lsvmd 00:03:29.085 LINK spdk_tgt 00:03:29.085 CC test/event/scheduler/scheduler.o 00:03:29.085 CXX test/cpp_headers/conf.o 00:03:29.085 CXX test/cpp_headers/config.o 00:03:29.085 LINK iscsi_fuzz 00:03:29.085 CC test/nvme/overhead/overhead.o 00:03:29.085 CC test/bdev/bdevio/bdevio.o 00:03:29.085 CC examples/vmd/led/led.o 00:03:29.343 CC test/app/jsoncat/jsoncat.o 00:03:29.343 CXX test/cpp_headers/cpuset.o 00:03:29.343 CC app/spdk_lspci/spdk_lspci.o 00:03:29.343 LINK scheduler 00:03:29.343 LINK led 00:03:29.343 LINK vhost_fuzz 00:03:29.343 LINK jsoncat 00:03:29.343 CXX test/cpp_headers/crc16.o 00:03:29.602 LINK spdk_lspci 00:03:29.602 CC app/spdk_nvme_perf/perf.o 00:03:29.602 LINK overhead 00:03:29.602 CXX test/cpp_headers/crc32.o 00:03:29.602 CC test/app/stub/stub.o 00:03:29.602 CC test/nvme/err_injection/err_injection.o 00:03:29.602 LINK bdevio 00:03:29.602 CC test/nvme/startup/startup.o 00:03:29.602 CC examples/idxd/perf/perf.o 00:03:29.861 CXX test/cpp_headers/crc64.o 00:03:29.861 CC test/nvme/reserve/reserve.o 00:03:29.861 CC examples/accel/perf/accel_perf.o 00:03:29.861 LINK stub 00:03:29.861 LINK err_injection 00:03:29.861 CXX test/cpp_headers/dif.o 00:03:29.861 LINK startup 00:03:29.861 CXX test/cpp_headers/dma.o 00:03:30.120 CXX test/cpp_headers/endian.o 00:03:30.120 LINK reserve 00:03:30.120 CC test/nvme/simple_copy/simple_copy.o 00:03:30.120 CXX test/cpp_headers/env_dpdk.o 00:03:30.120 LINK idxd_perf 00:03:30.120 CC test/nvme/connect_stress/connect_stress.o 00:03:30.120 CC test/nvme/boot_partition/boot_partition.o 
00:03:30.120 CXX test/cpp_headers/env.o 00:03:30.379 CC test/nvme/compliance/nvme_compliance.o 00:03:30.379 LINK connect_stress 00:03:30.379 LINK boot_partition 00:03:30.379 CC app/spdk_nvme_identify/identify.o 00:03:30.379 CXX test/cpp_headers/event.o 00:03:30.379 LINK simple_copy 00:03:30.379 LINK accel_perf 00:03:30.379 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.638 LINK spdk_nvme_perf 00:03:30.638 CXX test/cpp_headers/fd_group.o 00:03:30.638 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:30.638 CC test/nvme/fdp/fdp.o 00:03:30.638 CC test/nvme/cuse/cuse.o 00:03:30.638 LINK fused_ordering 00:03:30.638 LINK nvme_compliance 00:03:30.638 CXX test/cpp_headers/fd.o 00:03:30.897 CC app/spdk_nvme_discover/discovery_aer.o 00:03:30.897 CC examples/blob/hello_world/hello_blob.o 00:03:30.897 LINK doorbell_aers 00:03:30.897 CXX test/cpp_headers/file.o 00:03:30.897 CC examples/blob/cli/blobcli.o 00:03:30.897 LINK spdk_nvme_discover 00:03:31.156 LINK fdp 00:03:31.156 CC examples/nvme/hello_world/hello_world.o 00:03:31.156 LINK hello_blob 00:03:31.156 CXX test/cpp_headers/ftl.o 00:03:31.156 CC app/spdk_top/spdk_top.o 00:03:31.414 CXX test/cpp_headers/gpt_spec.o 00:03:31.414 LINK hello_world 00:03:31.414 CC app/vhost/vhost.o 00:03:31.414 CC examples/bdev/hello_world/hello_bdev.o 00:03:31.414 LINK spdk_nvme_identify 00:03:31.414 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.414 CXX test/cpp_headers/hexlify.o 00:03:31.673 LINK vhost 00:03:31.673 LINK blobcli 00:03:31.673 CC examples/nvme/reconnect/reconnect.o 00:03:31.673 LINK hello_bdev 00:03:31.673 CXX test/cpp_headers/histogram_data.o 00:03:31.673 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:31.673 CXX test/cpp_headers/idxd.o 00:03:31.673 CXX test/cpp_headers/idxd_spec.o 00:03:31.932 CXX test/cpp_headers/init.o 00:03:31.932 CXX test/cpp_headers/ioat.o 00:03:31.932 CXX test/cpp_headers/ioat_spec.o 00:03:31.932 CXX test/cpp_headers/iscsi_spec.o 00:03:31.932 CXX test/cpp_headers/json.o 00:03:31.932 LINK reconnect 00:03:32.190 CXX test/cpp_headers/jsonrpc.o 00:03:32.190 CC app/spdk_dd/spdk_dd.o 00:03:32.190 LINK cuse 00:03:32.190 LINK spdk_top 00:03:32.190 CC examples/nvme/arbitration/arbitration.o 00:03:32.190 CC examples/nvme/hotplug/hotplug.o 00:03:32.190 LINK nvme_manage 00:03:32.449 CXX test/cpp_headers/keyring.o 00:03:32.449 CC app/fio/nvme/fio_plugin.o 00:03:32.449 LINK bdevperf 00:03:32.449 CXX test/cpp_headers/keyring_module.o 00:03:32.449 CXX test/cpp_headers/likely.o 00:03:32.449 CXX test/cpp_headers/log.o 00:03:32.449 CC app/fio/bdev/fio_plugin.o 00:03:32.449 LINK hotplug 00:03:32.449 LINK spdk_dd 00:03:32.706 LINK arbitration 00:03:32.706 CXX test/cpp_headers/lvol.o 00:03:32.706 CXX test/cpp_headers/memory.o 00:03:32.706 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:32.706 CC examples/nvme/abort/abort.o 00:03:32.706 CXX test/cpp_headers/mmio.o 00:03:32.706 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:32.706 CXX test/cpp_headers/nbd.o 00:03:32.706 CXX test/cpp_headers/net.o 00:03:32.964 CXX test/cpp_headers/notify.o 00:03:32.964 CXX test/cpp_headers/nvme.o 00:03:32.964 LINK cmb_copy 00:03:32.964 CXX test/cpp_headers/nvme_intel.o 00:03:32.964 LINK pmr_persistence 00:03:32.965 CXX test/cpp_headers/nvme_ocssd.o 00:03:32.965 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:32.965 LINK spdk_nvme 00:03:32.965 CXX test/cpp_headers/nvme_spec.o 00:03:33.224 CXX test/cpp_headers/nvme_zns.o 00:03:33.224 CXX test/cpp_headers/nvmf_cmd.o 00:03:33.224 LINK spdk_bdev 00:03:33.224 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:33.224 CXX 
test/cpp_headers/nvmf.o 00:03:33.224 LINK abort 00:03:33.224 CXX test/cpp_headers/nvmf_spec.o 00:03:33.224 CXX test/cpp_headers/nvmf_transport.o 00:03:33.224 CXX test/cpp_headers/opal.o 00:03:33.224 CXX test/cpp_headers/opal_spec.o 00:03:33.224 CXX test/cpp_headers/pci_ids.o 00:03:33.224 CXX test/cpp_headers/pipe.o 00:03:33.224 CXX test/cpp_headers/queue.o 00:03:33.224 CXX test/cpp_headers/reduce.o 00:03:33.483 CXX test/cpp_headers/rpc.o 00:03:33.483 CXX test/cpp_headers/scheduler.o 00:03:33.483 CXX test/cpp_headers/scsi.o 00:03:33.483 CXX test/cpp_headers/scsi_spec.o 00:03:33.483 CXX test/cpp_headers/sock.o 00:03:33.483 CXX test/cpp_headers/stdinc.o 00:03:33.483 CXX test/cpp_headers/string.o 00:03:33.483 CXX test/cpp_headers/thread.o 00:03:33.483 CXX test/cpp_headers/trace.o 00:03:33.483 CC examples/nvmf/nvmf/nvmf.o 00:03:33.483 CXX test/cpp_headers/trace_parser.o 00:03:33.740 CXX test/cpp_headers/tree.o 00:03:33.740 CXX test/cpp_headers/ublk.o 00:03:33.740 CXX test/cpp_headers/util.o 00:03:33.740 CXX test/cpp_headers/uuid.o 00:03:33.740 CXX test/cpp_headers/version.o 00:03:33.740 CXX test/cpp_headers/vfio_user_pci.o 00:03:33.740 CXX test/cpp_headers/vfio_user_spec.o 00:03:33.740 CXX test/cpp_headers/vhost.o 00:03:33.740 CXX test/cpp_headers/vmd.o 00:03:33.740 CXX test/cpp_headers/xor.o 00:03:33.740 CXX test/cpp_headers/zipf.o 00:03:33.998 LINK nvmf 00:03:34.565 LINK esnap 00:03:35.133 00:03:35.133 real 1m15.621s 00:03:35.133 user 7m36.790s 00:03:35.133 sys 1m33.951s 00:03:35.133 14:08:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:35.133 14:08:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:35.133 ************************************ 00:03:35.133 END TEST make 00:03:35.133 ************************************ 00:03:35.133 14:08:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:35.133 14:08:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:35.133 14:08:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:35.133 14:08:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.133 14:08:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:35.133 14:08:54 -- pm/common@44 -- $ pid=5239 00:03:35.133 14:08:54 -- pm/common@50 -- $ kill -TERM 5239 00:03:35.133 14:08:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.133 14:08:54 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:35.133 14:08:54 -- pm/common@44 -- $ pid=5240 00:03:35.133 14:08:54 -- pm/common@50 -- $ kill -TERM 5240 00:03:35.133 14:08:54 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:35.133 14:08:54 -- nvmf/common.sh@7 -- # uname -s 00:03:35.133 14:08:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:35.133 14:08:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:35.133 14:08:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:35.133 14:08:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:35.133 14:08:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:35.133 14:08:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:35.133 14:08:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:35.133 14:08:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:35.133 14:08:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:35.133 14:08:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:35.133 14:08:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:98a6780f-7c43-4db9-8e1a-dfa2b32a045c 00:03:35.133 14:08:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=98a6780f-7c43-4db9-8e1a-dfa2b32a045c 00:03:35.133 14:08:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:35.133 14:08:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:35.133 14:08:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:35.133 14:08:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:35.133 14:08:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:35.133 14:08:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:35.133 14:08:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:35.133 14:08:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:35.133 14:08:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.134 14:08:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.134 14:08:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.134 14:08:54 -- paths/export.sh@5 -- # export PATH 00:03:35.134 14:08:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:35.134 14:08:54 -- nvmf/common.sh@47 -- # : 0 00:03:35.134 14:08:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:35.134 14:08:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:35.134 14:08:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:35.134 14:08:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:35.134 14:08:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:35.134 14:08:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:35.134 14:08:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:35.134 14:08:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:35.134 14:08:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:35.134 14:08:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:35.134 14:08:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:35.134 14:08:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:35.134 14:08:54 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:35.134 14:08:54 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:35.134 14:08:54 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:35.134 14:08:54 -- spdk/autotest.sh@44 -- # modprobe nbd 
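The autotest.sh prologue traced above reroutes kernel coredumps to SPDK's own collector before any tests run: it records the original core_pattern, points new cores at scripts/core-collector.sh, creates the output coredumps directory, and loads the nbd module. A minimal bash sketch of that setup follows; the redirection targets are not visible in the `set -x` output, so the write to /proc/sys/kernel/core_pattern is an assumption here, not something the log confirms.

    # Sketch of the coredump-collection setup seen in the trace above (echo target assumed).
    old_core_pattern=$(cat /proc/sys/kernel/core_pattern)       # original pattern, saved for later restore
    mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps   # directory the collector writes into
    # Assumed target: /proc/sys/kernel/core_pattern (the redirection itself is not shown by xtrace).
    echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
    modprobe nbd                                                 # nbd module loaded up front, as traced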
00:03:35.134 14:08:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:35.134 14:08:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:35.134 14:08:54 -- spdk/autotest.sh@48 -- # udevadm_pid=53765 00:03:35.134 14:08:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:35.134 14:08:54 -- pm/common@17 -- # local monitor 00:03:35.134 14:08:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:35.134 14:08:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.134 14:08:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:35.134 14:08:54 -- pm/common@25 -- # sleep 1 00:03:35.134 14:08:54 -- pm/common@21 -- # date +%s 00:03:35.134 14:08:54 -- pm/common@21 -- # date +%s 00:03:35.134 14:08:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1722002934 00:03:35.134 14:08:54 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1722002934 00:03:35.134 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1722002934_collect-vmstat.pm.log 00:03:35.134 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1722002934_collect-cpu-load.pm.log 00:03:36.509 14:08:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:36.509 14:08:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:36.509 14:08:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:36.509 14:08:55 -- common/autotest_common.sh@10 -- # set +x 00:03:36.509 14:08:55 -- spdk/autotest.sh@59 -- # create_test_list 00:03:36.509 14:08:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:36.509 14:08:55 -- common/autotest_common.sh@10 -- # set +x 00:03:36.509 14:08:55 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:36.509 14:08:55 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:36.509 14:08:55 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:36.509 14:08:55 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:36.509 14:08:55 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:36.509 14:08:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:36.509 14:08:55 -- common/autotest_common.sh@1455 -- # uname 00:03:36.509 14:08:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:36.510 14:08:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:36.510 14:08:55 -- common/autotest_common.sh@1475 -- # uname 00:03:36.510 14:08:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:36.510 14:08:55 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:36.510 14:08:55 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:36.510 14:08:55 -- spdk/autotest.sh@72 -- # hash lcov 00:03:36.510 14:08:55 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:36.510 14:08:55 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:36.510 --rc lcov_branch_coverage=1 00:03:36.510 --rc lcov_function_coverage=1 00:03:36.510 --rc genhtml_branch_coverage=1 00:03:36.510 --rc genhtml_function_coverage=1 00:03:36.510 --rc genhtml_legend=1 00:03:36.510 --rc geninfo_all_blocks=1 00:03:36.510 ' 00:03:36.510 14:08:55 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:36.510 --rc lcov_branch_coverage=1 00:03:36.510 --rc 
lcov_function_coverage=1 00:03:36.510 --rc genhtml_branch_coverage=1 00:03:36.510 --rc genhtml_function_coverage=1 00:03:36.510 --rc genhtml_legend=1 00:03:36.510 --rc geninfo_all_blocks=1 00:03:36.510 ' 00:03:36.510 14:08:55 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:36.510 --rc lcov_branch_coverage=1 00:03:36.510 --rc lcov_function_coverage=1 00:03:36.510 --rc genhtml_branch_coverage=1 00:03:36.510 --rc genhtml_function_coverage=1 00:03:36.510 --rc genhtml_legend=1 00:03:36.510 --rc geninfo_all_blocks=1 00:03:36.510 --no-external' 00:03:36.510 14:08:55 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:36.510 --rc lcov_branch_coverage=1 00:03:36.510 --rc lcov_function_coverage=1 00:03:36.510 --rc genhtml_branch_coverage=1 00:03:36.510 --rc genhtml_function_coverage=1 00:03:36.510 --rc genhtml_legend=1 00:03:36.510 --rc geninfo_all_blocks=1 00:03:36.510 --no-external' 00:03:36.510 14:08:55 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:36.510 lcov: LCOV version 1.14 00:03:36.510 14:08:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:51.417 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:51.417 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:59.536 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:59.536 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:59.536 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:59.796 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:59.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:59.796 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:59.797 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:59.797 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:59.797 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:59.797 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:59.797 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:59.797 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:59.797 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:59.797 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:59.797 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:00.056 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:00.056 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:03.342 14:09:22 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:03.342 14:09:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:03.342 14:09:22 -- common/autotest_common.sh@10 -- # set +x 00:04:03.342 14:09:22 -- spdk/autotest.sh@91 -- # rm -f 00:04:03.342 14:09:22 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:03.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:04:04.168 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:04.168 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:04.168 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:04.168 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:04.168 14:09:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:04.168 14:09:23 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:04.168 14:09:23 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:04.168 14:09:23 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:04.168 14:09:23 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:04.168 14:09:23 -- common/autotest_common.sh@1664 -- # [[ -e 
/sys/block/nvme3n1/queue/zoned ]] 00:04:04.168 14:09:23 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:04.168 14:09:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:04.168 14:09:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.168 14:09:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.168 14:09:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:04.168 14:09:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:04.168 14:09:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:04.427 No valid GPT data, bailing 00:04:04.427 14:09:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.427 14:09:23 -- scripts/common.sh@391 -- # pt= 00:04:04.427 14:09:23 -- scripts/common.sh@392 -- # return 1 00:04:04.427 14:09:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:04.427 1+0 records in 00:04:04.427 1+0 records out 00:04:04.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125009 s, 83.9 MB/s 00:04:04.427 14:09:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.427 14:09:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.427 14:09:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:04.427 14:09:23 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:04.427 14:09:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:04.427 No valid GPT data, bailing 00:04:04.427 14:09:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:04.427 14:09:24 -- scripts/common.sh@391 -- # pt= 00:04:04.427 14:09:24 -- scripts/common.sh@392 -- # return 1 00:04:04.427 14:09:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:04.427 1+0 records in 00:04:04.427 1+0 records out 00:04:04.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00335832 s, 312 MB/s 00:04:04.427 14:09:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.427 14:09:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.427 14:09:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:04.427 14:09:24 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:04.427 14:09:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:04.427 No valid GPT data, bailing 00:04:04.427 14:09:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:04.427 14:09:24 -- scripts/common.sh@391 -- # pt= 00:04:04.427 14:09:24 -- scripts/common.sh@392 -- # return 1 00:04:04.427 14:09:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:04.427 1+0 records in 00:04:04.427 1+0 records out 00:04:04.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450029 s, 233 MB/s 00:04:04.427 14:09:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.427 14:09:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.427 14:09:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:04.427 14:09:24 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:04.427 14:09:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:04.687 No valid GPT data, bailing 00:04:04.687 14:09:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:04.687 14:09:24 -- scripts/common.sh@391 -- # pt= 00:04:04.687 14:09:24 -- scripts/common.sh@392 -- # return 1 00:04:04.687 14:09:24 -- 
spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:04.687 1+0 records in 00:04:04.687 1+0 records out 00:04:04.687 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459533 s, 228 MB/s 00:04:04.687 14:09:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.687 14:09:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.687 14:09:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:04.687 14:09:24 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:04.687 14:09:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:04.687 No valid GPT data, bailing 00:04:04.687 14:09:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:04.688 14:09:24 -- scripts/common.sh@391 -- # pt= 00:04:04.688 14:09:24 -- scripts/common.sh@392 -- # return 1 00:04:04.688 14:09:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:04.688 1+0 records in 00:04:04.688 1+0 records out 00:04:04.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041155 s, 255 MB/s 00:04:04.688 14:09:24 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.688 14:09:24 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:04.688 14:09:24 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:04.688 14:09:24 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:04.688 14:09:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:04.688 No valid GPT data, bailing 00:04:04.688 14:09:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:04.688 14:09:24 -- scripts/common.sh@391 -- # pt= 00:04:04.688 14:09:24 -- scripts/common.sh@392 -- # return 1 00:04:04.688 14:09:24 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:04.688 1+0 records in 00:04:04.688 1+0 records out 00:04:04.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418176 s, 251 MB/s 00:04:04.688 14:09:24 -- spdk/autotest.sh@118 -- # sync 00:04:05.255 14:09:24 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:05.255 14:09:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:05.255 14:09:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:07.160 14:09:26 -- spdk/autotest.sh@124 -- # uname -s 00:04:07.160 14:09:26 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:07.160 14:09:26 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:07.160 14:09:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.160 14:09:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.160 14:09:26 -- common/autotest_common.sh@10 -- # set +x 00:04:07.160 ************************************ 00:04:07.160 START TEST setup.sh 00:04:07.160 ************************************ 00:04:07.160 14:09:26 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:07.160 * Looking for test storage... 
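Each NVMe namespace in the listing above goes through the same check-then-wipe sequence: the block_in_use helper probes the device for an existing partition table, and only when nothing is found ("No valid GPT data, bailing", empty PTTYPE) does autotest zero the first 1 MiB of the device, followed by a final sync. The loop below is a condensed sketch of that behaviour using the commands visible in the trace; the real helper also runs scripts/spdk-gpt.py and further checks, so treat this as an illustration rather than the script's full logic.

    shopt -s extglob   # the /dev/nvme*n!(*p*) pattern in the trace relies on extglob
    for dev in /dev/nvme*n!(*p*); do
        # Condensed probe: the traced block_in_use also invokes scripts/spdk-gpt.py against
        # the device; only the blkid PTTYPE check shown in the log is kept here.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            # No partition table claims the namespace, so wipe its first 1 MiB as traced above.
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
    sync   # flush the writes, matching the sync issued after the loop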
00:04:07.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.160 14:09:26 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:07.160 14:09:26 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:07.160 14:09:26 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:07.160 14:09:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.160 14:09:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.160 14:09:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.160 ************************************ 00:04:07.160 START TEST acl 00:04:07.160 ************************************ 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:07.160 * Looking for test storage... 00:04:07.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.160 14:09:26 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:07.160 14:09:26 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:07.160 14:09:26 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:07.160 14:09:26 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:07.160 14:09:26 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:07.160 14:09:26 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:07.160 14:09:26 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:07.160 14:09:26 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:07.160 14:09:26 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.160 14:09:26 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:08.104 14:09:27 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:08.104 14:09:27 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:08.104 14:09:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.104 14:09:27 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:08.104 14:09:27 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.104 14:09:27 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.672 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:08.672 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.672 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.931 Hugepages 00:04:08.931 node hugesize free / total 00:04:08.931 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:08.931 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.931 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:08.931 00:04:08.931 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.931 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:08.931 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:08.931 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.190 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.449 14:09:28 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:09.449 14:09:28 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.449 14:09:28 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:09.449 14:09:28 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.449 14:09:28 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.449 14:09:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:09.449 14:09:29 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:09.449 14:09:29 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.449 14:09:29 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.449 14:09:29 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:09.449 ************************************ 00:04:09.449 START TEST denied 00:04:09.449 ************************************ 00:04:09.449 14:09:29 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:09.449 14:09:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:09.449 14:09:29 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:09.449 14:09:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.449 14:09:29 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:09.449 14:09:29 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:10.826 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:10.826 14:09:30 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.826 14:09:30 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.393 00:04:17.393 real 0m7.142s 00:04:17.393 user 0m0.826s 00:04:17.393 sys 0m1.335s 00:04:17.393 14:09:36 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:17.393 14:09:36 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 ************************************ 00:04:17.393 END TEST denied 00:04:17.393 ************************************ 00:04:17.393 14:09:36 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:17.393 14:09:36 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:17.393 14:09:36 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:17.393 14:09:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:17.393 ************************************ 00:04:17.393 START TEST allowed 00:04:17.393 ************************************ 00:04:17.393 14:09:36 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:17.393 14:09:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:17.393 14:09:36 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:17.393 14:09:36 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:17.393 14:09:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.393 14:09:36 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:17.652 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.652 14:09:37 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.031 00:04:19.031 real 0m2.164s 00:04:19.031 user 0m0.961s 00:04:19.031 sys 0m1.182s 00:04:19.031 14:09:38 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.031 ************************************ 00:04:19.031 END TEST allowed 00:04:19.031 ************************************ 00:04:19.031 14:09:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:19.031 ************************************ 00:04:19.031 END TEST acl 00:04:19.031 ************************************ 00:04:19.031 00:04:19.031 real 0m11.832s 00:04:19.031 user 0m2.976s 00:04:19.031 sys 0m3.842s 00:04:19.031 14:09:38 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:19.031 14:09:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:19.031 14:09:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:19.031 14:09:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.031 14:09:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.031 14:09:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.031 ************************************ 00:04:19.031 START TEST hugepages 00:04:19.031 ************************************ 00:04:19.031 14:09:38 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:19.031 * Looking for test storage... 
00:04:19.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5825004 kB' 'MemAvailable: 7425772 kB' 'Buffers: 2436 kB' 'Cached: 1814012 kB' 'SwapCached: 0 kB' 'Active: 444492 kB' 'Inactive: 1473936 kB' 'Active(anon): 112492 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 103328 kB' 'Mapped: 48528 kB' 'Shmem: 10512 kB' 'KReclaimable: 63540 kB' 'Slab: 136652 kB' 'SReclaimable: 63540 kB' 'SUnreclaim: 73112 kB' 'KernelStack: 6332 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 326324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.031 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.032 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:19.033 14:09:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:19.033 14:09:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.033 14:09:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.033 14:09:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:19.033 ************************************ 00:04:19.033 START TEST default_setup 00:04:19.033 ************************************ 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.033 14:09:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.170 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.170 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.170 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.170 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:20.170 
14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913100 kB' 'MemAvailable: 9513612 kB' 'Buffers: 2436 kB' 'Cached: 1813996 kB' 'SwapCached: 0 kB' 'Active: 462304 kB' 'Inactive: 1473948 kB' 'Active(anon): 130304 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121444 kB' 'Mapped: 48796 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135908 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72900 kB' 'KernelStack: 6384 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.170 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.171 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913100 kB' 'MemAvailable: 9513624 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461812 kB' 'Inactive: 1473960 kB' 'Active(anon): 129812 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473960 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120928 kB' 'Mapped: 48596 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135836 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72828 kB' 'KernelStack: 6336 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.172 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.434 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.434 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.434 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.434 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.434 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.434 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.435 14:09:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.435 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913100 kB' 'MemAvailable: 9513628 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461876 kB' 'Inactive: 1473964 kB' 'Active(anon): 129876 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121032 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135824 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72816 kB' 'KernelStack: 6352 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.436 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 
14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:39 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:20.437 nr_hugepages=1024 00:04:20.437 resv_hugepages=0 00:04:20.437 surplus_hugepages=0 00:04:20.437 anon_hugepages=0 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.437 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913100 kB' 'MemAvailable: 9513628 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461904 kB' 'Inactive: 1473964 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121040 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135824 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72816 kB' 'KernelStack: 6352 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:20.438 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[trace abridged: setup/common.sh@31-32 read every remaining /proc/meminfo field in turn (Buffers, Cached, SwapCached, Active/Inactive and their anon/file splits, Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) and skipped each one with "continue" because it is not HugePages_Total]
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
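The abridged trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo (or a per-node meminfo file) one field at a time until it reaches the requested key, then echoing that key's value. A minimal stand-alone sketch of the same lookup, assuming only standard bash and the procfs/sysfs paths shown in the trace; the packaging as a separate function here is illustrative, not SPDK's exact code:

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs when a node id is passed (node 0 above).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <id> "; strip that first,
        # then split on ':' / whitespace exactly like the IFS=': ' reads in the log.
        sed -E 's/^Node [0-9]+ +//' "$mem_f" | while IFS=': ' read -r var val _; do
            # Non-matching fields are skipped, which is what produces the long
            # "continue" runs in the xtrace output above.
            [[ $var == "$get" ]] && { echo "$val"; break; }
        done
    }
    # e.g. get_meminfo HugePages_Total   -> 1024 in this run
    #      get_meminfo HugePages_Surp 0  -> 0 for node0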
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.440 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7913448 kB' 'MemUsed: 4328524 kB' 'SwapCached: 0 kB' 'Active: 461800 kB' 'Inactive: 1473964 kB' 'Active(anon): 129800 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 1816436 kB' 'Mapped: 48536 kB' 'AnonPages: 120920 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63008 kB' 'Slab: 135824 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace abridged: setup/common.sh@31-32 read each node0 meminfo field in turn and skipped every one that is not HugePages_Surp with "continue"]
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:20.441 node0=1024 expecting 1024
14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:20.441
00:04:20.441 real 0m1.411s
00:04:20.441 user 0m0.649s
00:04:20.441 sys 0m0.707s
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:20.441 ************************************
00:04:20.441 END TEST default_setup
00:04:20.441 ************************************
00:04:20.441 14:09:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:20.441 14:09:40 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
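default_setup passes because the system-wide pool and the per-node pool agree: HugePages_Total (1024) equals nr_hugepages plus surplus plus reserved pages, and the single NUMA node of this VM holds the whole pool, hence the "node0=1024 expecting 1024" output. A stand-alone restatement of that check, assuming the get_meminfo sketch shown earlier; the surp and resv terms were fetched earlier in the test (not shown in this excerpt), presumably from HugePages_Surp and HugePages_Rsvd, and the wrapper below is illustrative rather than SPDK's exact code:

    nr_hugepages=1024                          # what default_setup requested
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # System-wide accounting, as in the setup/hugepages.sh@110 check above.
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"
    # Per-node accounting: each node's share must add up to the same pool;
    # on this single-node VM that is simply node0=1024.
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}
        echo "node$id=$(get_meminfo HugePages_Total "$id") expecting $nr_hugepages"
    done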
00:04:20.441 14:09:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:20.441 14:09:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:20.441 14:09:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:20.441 ************************************
00:04:20.441 START TEST per_node_1G_alloc
00:04:20.441 ************************************
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.441 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:21.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:21.012 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.012 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.012 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.012 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
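get_test_nr_hugepages converts the requested 1048576 kB into 512 pages of the default 2048 kB size and pins them to node 0, then hands NRHUGE=512 HUGENODE=0 to scripts/setup.sh. setup.sh's internals are not shown in this log; the snippet below is only an illustration of the equivalent reservation through the kernel's standard per-node sysfs knob (the path and the tee usage are assumptions, not taken from the trace):

    NRHUGE=512     # 1048576 kB / 2048 kB per page
    HUGENODE=0
    # Reserve NRHUGE default-size hugepages on the chosen NUMA node.
    echo "$NRHUGE" | sudo tee \
        "/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB/nr_hugepages"
    # The follow-up meminfo dump below reflects the result:
    # HugePages_Total: 512, Hugepagesize: 2048 kB, Hugetlb: 1048576 kB.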
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.012 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8958188 kB' 'MemAvailable: 10558716 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 462196 kB' 'Inactive: 1473964 kB' 'Active(anon): 130196 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121288 kB' 'Mapped: 48736 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135832 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72824 kB' 'KernelStack: 6320 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[trace abridged: setup/common.sh@31-32 read each /proc/meminfo field in turn and skipped every one that is not AnonHugePages with "continue"]
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
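verify_nr_hugepages only treats transparent hugepages as relevant when the THP policy string does not contain "[never]" (the "always [madvise] never" test at setup/hugepages.sh@96 above); here AnonHugePages is 0 kB, so anon ends up 0. A stand-alone version of that guard, assuming the policy string comes from the usual sysfs file, which the trace does not actually show being read:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is at least partially enabled, so count THP-backed anonymous memory.
        anon=$(get_meminfo AnonHugePages)   # 0 in this run
    else
        anon=0
    fi
    echo "anon=$anon"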
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.013 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8958188 kB' 'MemAvailable: 10558716 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461928 kB' 'Inactive: 1473964 kB' 'Active(anon): 129928 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121056 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135836 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72828 kB' 'KernelStack: 6352 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[trace abridged: setup/common.sh@31-32 again read /proc/meminfo field by field, skipping everything that is not HugePages_Surp with "continue"; the scan continues below]
00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.015 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8958188 kB' 'MemAvailable: 10558716 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461916 kB' 'Inactive: 1473964 kB' 'Active(anon): 129916 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135836 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72828 kB' 'KernelStack: 6352 kB' 'PageTables: 4156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 
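The xtrace above and below shows setup/common.sh's get_meminfo helper scanning a meminfo snapshot one "Key: value" pair at a time until it reaches the requested key (first HugePages_Surp, here HugePages_Rsvd). A minimal bash sketch of that loop, reconstructed from the trace rather than copied from the SPDK source — the function signature and the node-argument handling are assumptions based on the 'local get', 'local node=' and /sys/devices/system/node lines visible in the trace:

# Hypothetical reconstruction of setup/common.sh's get_meminfo (sketch only, not the SPDK source).
get_meminfo() {
	local get=$1 node=${2:-}            # key to look up, optional NUMA node id
	local mem_f=/proc/meminfo mem var val _
	# With a node id, prefer the per-node file, e.g. /sys/devices/system/node/node0/meminfo.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <id> "; strip it (extglob pattern, as in the trace).
	shopt -s extglob
	mem=("${mem[@]#Node +([0-9]) }")
	# Walk the "Key: value [kB]" lines until the requested key matches, then print its value.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Called as, for example, surp=$(get_meminfo HugePages_Surp) for the system-wide value, or get_meminfo HugePages_Free 0 for node 0; in this run every HugePages_Surp and HugePages_Rsvd lookup returns 0, matching the 'echo 0' / 'return 0' lines in the trace.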
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 
14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.016 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.017 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.278 nr_hugepages=512 00:04:21.278 resv_hugepages=0 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo 
resv_hugepages=0 00:04:21.278 surplus_hugepages=0 00:04:21.278 anon_hugepages=0 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.278 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8958844 kB' 'MemAvailable: 10559368 kB' 'Buffers: 2436 kB' 'Cached: 1813996 kB' 'SwapCached: 0 kB' 'Active: 461732 kB' 'Inactive: 1473960 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473960 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'AnonPages: 121200 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135828 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72820 kB' 'KernelStack: 6336 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 
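At this point hugepages.sh has collected anon=0, surp=0 and resv=0 from the lookups above, reports nr_hugepages=512 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and checks (hugepages.sh@107/@109) that the requested 512 pages are fully accounted for before issuing the HugePages_Total query that follows. A compact sketch of that accounting step — the wrapper function name and structure are illustrative; only the arithmetic mirrors the trace:

# Hypothetical wrapper around the accounting seen at hugepages.sh@97-@110 (sketch only).
check_hugepage_pool() {
	local expected=$1   # requested pool size, 512 in this run
	local anon surp resv nr_hugepages
	anon=$(get_meminfo AnonHugePages)      # transparent huge pages currently in use (kB)
	surp=$(get_meminfo HugePages_Surp)     # surplus pages allocated beyond the configured pool
	resv=$(get_meminfo HugePages_Rsvd)     # pages reserved by mappings but not yet faulted in
	nr_hugepages=$(get_meminfo HugePages_Total)
	echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv"
	echo "surplus_hugepages=$surp anon_hugepages=$anon"
	# The pool is consistent when the requested count covers total, surplus and reserved pages.
	(( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages ))
}

With the values in this log, check_hugepage_pool 512 reduces to (( 512 == 512 + 0 + 0 )), so the check passes and the test proceeds.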
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.279 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:21.280 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.280 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8958844 kB' 'MemUsed: 3283128 kB' 'SwapCached: 0 kB' 'Active: 461940 kB' 'Inactive: 1473964 kB' 'Active(anon): 129940 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 276 kB' 'Writeback: 0 kB' 'FilePages: 1816436 kB' 'Mapped: 48536 kB' 'AnonPages: 121072 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63008 kB' 'Slab: 135828 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72820 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.281 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 
14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:21.282 node0=512 expecting 512 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:21.282 00:04:21.282 real 0m0.739s 00:04:21.282 user 0m0.335s 00:04:21.282 sys 0m0.409s 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.282 14:09:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:21.282 ************************************ 00:04:21.282 END TEST per_node_1G_alloc 00:04:21.282 ************************************ 00:04:21.282 14:09:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:21.282 14:09:40 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.282 14:09:40 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.282 14:09:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.282 ************************************ 00:04:21.282 START TEST even_2G_alloc 00:04:21.282 ************************************ 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.282 
14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.282 14:09:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.802 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.802 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.802 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.802 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.802 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908852 kB' 'MemAvailable: 9509380 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 462448 kB' 'Inactive: 1473964 kB' 'Active(anon): 130448 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121596 kB' 'Mapped: 48732 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135840 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72832 kB' 'KernelStack: 6384 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.802 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.803 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908852 kB' 'MemAvailable: 9509380 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461900 kB' 'Inactive: 1473964 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121048 kB' 'Mapped: 48732 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135840 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72832 kB' 'KernelStack: 6336 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.804 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.805 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.806 
14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908600 kB' 'MemAvailable: 9509128 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461940 kB' 'Inactive: 1473964 kB' 'Active(anon): 129940 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121040 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135840 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72832 kB' 'KernelStack: 6336 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.806 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.807 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.808 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:21.809 nr_hugepages=1024 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.809 resv_hugepages=0 00:04:21.809 surplus_hugepages=0 00:04:21.809 anon_hugepages=0 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908600 kB' 'MemAvailable: 9509128 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461936 kB' 'Inactive: 1473964 kB' 'Active(anon): 129936 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'AnonPages: 121056 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63008 kB' 'Slab: 135840 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72832 kB' 'KernelStack: 6336 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.809 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.069 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.070 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 
14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7908600 kB' 'MemUsed: 4333372 kB' 'SwapCached: 0 kB' 'Active: 461928 kB' 'Inactive: 1473964 kB' 'Active(anon): 129928 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 288 kB' 'Writeback: 0 kB' 'FilePages: 1816436 kB' 'Mapped: 48536 kB' 'AnonPages: 121040 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63008 kB' 'Slab: 135836 kB' 'SReclaimable: 63008 kB' 'SUnreclaim: 72828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.071 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.072 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.073 node0=1024 expecting 1024 00:04:22.073 ************************************ 00:04:22.073 END TEST even_2G_alloc 00:04:22.073 ************************************ 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.073 00:04:22.073 real 0m0.711s 00:04:22.073 user 0m0.332s 00:04:22.073 sys 0m0.398s 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.073 14:09:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.073 14:09:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:22.073 14:09:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.073 14:09:41 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.073 14:09:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
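Editor's note: the even_2G_alloc case ends here by reading HugePages_Total (1024), checking it against nr_hugepages plus surplus and reserved pages, then distributing the expectation across NUMA nodes (a single node in this VM, hence "node0=1024 expecting 1024") and adding each node's HugePages_Surp before the final comparison. The snippet below is a self-contained approximation of that end-of-test assertion using only the values visible in the trace; the real hugepages.sh tracks surplus/reserved bookkeeping that is simplified away here.

    #!/usr/bin/env bash
    # Approximate re-statement of the even_2G_alloc verification (simplified).
    nr_hugepages=1024
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages )) || { echo "unexpected HugePages_Total=$total"; exit 1; }

    # On this single-node VM the whole allocation is expected on node0.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        surp=$(awk '/HugePages_Surp/ {print $NF}' "$node_dir/meminfo")
        echo "node$node=$(( nr_hugepages + surp )) expecting $nr_hugepages"
    done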
00:04:22.073 ************************************ 00:04:22.073 START TEST odd_alloc 00:04:22.073 ************************************ 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.073 14:09:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.594 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.594 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.594 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.594 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902332 kB' 'MemAvailable: 9502860 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 462204 kB' 'Inactive: 1473964 kB' 'Active(anon): 130204 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121304 kB' 'Mapped: 48768 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135924 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6360 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
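Editor's note: the odd_alloc case requests HUGEMEM=2049, i.e. a 2098176 kB allocation, which the trace shows being turned into nr_hugepages=1025 deliberately odd 2048 kB pages. Its verify step starts by testing whether transparent hugepages are disabled; the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" entry is matching the contents of the THP "enabled" file against "[never]", and only when THP is not disabled does it sample AnonHugePages as a baseline (0 kB in this run). The lines below are a rough, hedged restatement of that check; variable names are illustrative and the surrounding bookkeeping is simplified.

    #!/usr/bin/env bash
    # Rough restatement of the THP check and AnonHugePages baseline seen in the trace.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP not disabled: record current anonymous hugepage usage (kB; 0 in this run).
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"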
00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.594 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 
14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.595 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902332 kB' 'MemAvailable: 9502860 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 462076 kB' 'Inactive: 1473964 kB' 'Active(anon): 130076 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121180 kB' 'Mapped: 48528 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135924 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6320 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.596 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.597 14:09:42 setup.sh.hugepages.odd_alloc -- [identical compare/continue/IFS/read trace repeated for each remaining /proc/meminfo key from Dirty through FileHugePages; none matches HugePages_Surp] 00:04:22.598
14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902340 kB' 'MemAvailable: 9502868 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 461988 kB' 'Inactive: 1473964 kB' 'Active(anon): 129988 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121092 kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135932 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72928 kB' 'KernelStack: 6352 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
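The trace above is setup/common.sh's get_meminfo walking every meminfo line until it reaches the requested key and echoing that key's value. A minimal stand-alone sketch of the same pattern follows; the helper name get_meminfo_sketch is made up for illustration, and only the mapfile / prefix-strip / IFS-read / continue structure is taken from the trace itself.

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: print the value of one key from
# /proc/meminfo, or from a per-NUMA-node meminfo file (whose lines carry a
# "Node <n> " prefix that has to be stripped, as the mem=() step in the
# trace shows).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " prefix if present

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # skip every non-matching key
        echo "$val"
        return 0
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Surp     -> system-wide surplus huge pages
#      get_meminfo_sketch HugePages_Free 0   -> free huge pages on NUMA node 0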
00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.598 14:09:42 setup.sh.hugepages.odd_alloc -- [identical compare/continue/IFS/read trace repeated for each remaining /proc/meminfo key from Cached through Unaccepted; none matches HugePages_Rsvd] 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:22.600 nr_hugepages=1025 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:22.600 resv_hugepages=0 00:04:22.600 surplus_hugepages=0 00:04:22.600 anon_hugepages=0 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.600 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902340 kB' 'MemAvailable: 9502868 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 462008 kB' 'Inactive: 1473964 kB' 'Active(anon): 130008 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'AnonPages: 121084 
kB' 'Mapped: 48536 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135928 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72924 kB' 'KernelStack: 6352 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- 
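What the setup/hugepages.sh lines in this trace check is the odd-allocation bookkeeping: the test asked the kernel for 1025 huge pages, and HugePages_Total must equal nr_hugepages plus surplus plus reserved. A hedged sketch of that consistency check, reusing the hypothetical get_meminfo_sketch helper from the sketch above; variable names mirror the trace and the values in the comments are the ones this run reports.

# Assumes get_meminfo_sketch from the sketch above.
nr_hugepages=1025
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)  # 1025 in this run

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

# The step passes only if the kernel accounts for every requested page.
(( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2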
setup/common.sh@31 -- # IFS=': ' 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.601 14:09:42 setup.sh.hugepages.odd_alloc -- [identical compare/continue/IFS/read trace repeated for each remaining /proc/meminfo key from Inactive through ShmemHugePages; none matches HugePages_Total] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7902340 kB' 'MemUsed: 4339632 kB' 'SwapCached: 0 kB' 'Active: 461700 kB' 'Inactive: 1473964 kB' 'Active(anon): 129700 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 192 kB' 'Writeback: 0 kB' 'FilePages: 1816436 kB' 'Mapped: 48536 kB' 'AnonPages: 120924 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135920 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
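After the system-wide totals check out, the get_nodes / node-loop trace re-reads the per-node meminfo files to see where the 1025 pages actually landed; on this single-node VM (no_nodes=1) they should all sit on node 0. A sketch of that per-node pass, again reusing the hypothetical get_meminfo_sketch helper introduced earlier.

# Assumes get_meminfo_sketch from the first sketch.
declare -A nodes_test
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_test[$node]=1025              # pages expected per node in this test
done

for node in "${!nodes_test[@]}"; do
    surp=$(get_meminfo_sketch HugePages_Surp "$node")
    free=$(get_meminfo_sketch HugePages_Free "$node")
    echo "node$node: expected=${nodes_test[$node]} free=$free surplus=$surp"
done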
00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.863 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.864 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.865 node0=1025 expecting 1025 00:04:22.865 ************************************ 00:04:22.865 END TEST odd_alloc 00:04:22.865 ************************************ 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:22.865 00:04:22.865 real 0m0.725s 00:04:22.865 user 0m0.344s 00:04:22.865 sys 0m0.399s 00:04:22.865 14:09:42 
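[editorial note] For readers following the trace, this is a minimal sketch of what setup/common.sh's get_meminfo appears to be doing in the odd_alloc run above (common.sh@17-@33 in the xtrace): pick the per-node meminfo file when a node id is given, strip the "Node N " prefix, then scan "Key: value" pairs until the requested key is found. Variable names mirror the trace; this is a reconstruction for clarity, not the verbatim SPDK source.

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern seen in the trace

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node view when one exists (common.sh@23-@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines start with "Node N "; drop that prefix (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk "Key: value ..." pairs; "_" swallows the trailing "kB" unit.
        # Every non-matching key is skipped with continue (the long runs above);
        # the matching one is echoed and the function returns 0 (common.sh@33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # e.g. get_meminfo HugePages_Surp 0 prints 0 on this host, which is the
    # value hugepages.sh@117 adds into nodes_test[0].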
setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.865 14:09:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:22.865 14:09:42 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:22.865 14:09:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.865 14:09:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.865 14:09:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.865 ************************************ 00:04:22.865 START TEST custom_alloc 00:04:22.865 ************************************ 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node 
in "${!nodes_hp[@]}" 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.865 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.388 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.388 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.388 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.388 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.388 
14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.388 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951944 kB' 'MemAvailable: 10552476 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 462084 kB' 'Inactive: 1473968 kB' 'Active(anon): 130084 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121212 kB' 'Mapped: 48648 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135872 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72868 kB' 'KernelStack: 6424 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.389 14:09:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951944 kB' 'MemAvailable: 10552476 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 462180 kB' 'Inactive: 1473968 kB' 'Active(anon): 130180 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 121284 kB' 'Mapped: 48600 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135888 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72884 kB' 'KernelStack: 6376 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.390 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue
00:04:23.391 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (scan continues: IFS=': ' read -r var val _; SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd are each skipped with continue because they do not match HugePages_Surp)
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.392 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951696 kB' 'MemAvailable: 10552228 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 461968 kB' 'Inactive: 1473968 kB' 'Active(anon): 129968 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 48544 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135928 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72924 kB' 'KernelStack: 6352 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
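For readers skimming these snapshots: the only fields this part of the test cares about are the HugePages_* counters near the end of each dump (HugePages_Total/Free/Rsvd/Surp, plus Hugepagesize and Hugetlb). A quick way to look at just those fields on a live box, outside the harness (an illustrative one-liner, not something common.sh runs):

    # Spot-check only the hugepage counters from the same source the trace parses.
    grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo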
00:04:23.393 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.393 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (scan continues: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total and HugePages_Free are each skipped with continue because they do not match HugePages_Rsvd)
00:04:23.394 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:23.394 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:23.394 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:23.394 nr_hugepages=512
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:23.395 resv_hugepages=0
00:04:23.395 surplus_hugepages=0
00:04:23.395 anon_hugepages=0
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
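The two arithmetic guards just traced (setup/hugepages.sh@107 and @109) are the point of this custom_alloc step: the pool the test asked for has to be fully accounted for by the allocated, surplus and reserved pages the kernel reports. A standalone restatement of that kind of check is sketched below; the variable names and the literal 512 are taken from this run and are illustrative, not the suite's own code.

    # Illustrative consistency check in the spirit of hugepages.sh@107-@110 (assumed names).
    expected=512
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( expected == nr_hugepages + surp + resv )) || echo "unexpected hugepage split" >&2
    (( expected == total )) || echo "HugePages_Total does not match the requested pool" >&2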
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951444 kB' 'MemAvailable: 10551976 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 461904 kB' 'Inactive: 1473968 kB' 'Active(anon): 129904 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 121004 kB' 'Mapped: 48544 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135920 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6336 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
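The long key-by-key scan that follows is common.sh's get_meminfo helper at work: it snapshots the relevant meminfo file once (the per-node file when a node argument is given), then walks the 'Key: value' pairs and skips everything until the requested counter turns up. A condensed reconstruction from the trace is shown below; it is an approximation of setup/common.sh, not the authoritative source, and the per-node fallback logic is inferred.

    # Reconstructed sketch of the traced helper (approximation, not the real file).
    shopt -s extglob

    get_meminfo() {
        local get=$1
        local node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Use the per-node view when a node was requested and the file exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node N " prefix of per-node entries
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue  # skip non-matching keys, as in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total   # prints 512 on this box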
00:04:23.395 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.396 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (scan continues: the same non-matching keys as in the previous passes, MemTotal through CmaFree and Unaccepted, are each skipped with continue because they do not match HugePages_Total)
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:23.397 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8951444 kB' 'MemUsed: 3290528 kB' 'SwapCached: 0 kB' 'Active: 461960 kB' 'Inactive: 1473968 kB' 'Active(anon): 129960 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 1816440 kB' 'Mapped: 48544 kB' 'AnonPages: 121108 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135920 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72916 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
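What setup/hugepages.sh does with this per-node snapshot (the @112-@117 trace above): get_nodes records the hugepage allocation it finds for every NUMA node under /sys/devices/system/node, and the loop that follows adds the reserved pages to each node's expected count and then reads that node's surplus back through get_meminfo, which is exactly the scan that continues below. A simplified sketch of that bookkeeping; the array names follow the trace, while the sysfs path for the per-node count and the way nodes_test is seeded are assumptions.

    # Simplified sketch of the per-node bookkeeping (assumptions noted above).
    nodes_sys=() nodes_test=()
    resv=0   # from the earlier get_meminfo HugePages_Rsvd call in this run

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node[0-9]*; do
            # Assumed source of the per-node count (512 in this run).
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))
    }

    get_nodes
    # The suite fills nodes_test elsewhere; assume it mirrors the kernel's view here.
    for node in "${!nodes_sys[@]}"; do nodes_test[node]=${nodes_sys[node]}; done

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))              # reserved pages count toward the node
        surp=$(get_meminfo HugePages_Surp "$node")  # per-node surplus, as scanned below
        echo "node$node surplus: $surp"
    done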
00:04:23.664 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:23.665 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (scan continues over the node0 snapshot: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted are each skipped with continue because they do not match HugePages_Surp)
00:04:23.666 14:09:43
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.666 node0=512 expecting 512 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:23.666 00:04:23.666 real 0m0.727s 00:04:23.666 user 0m0.361s 00:04:23.666 sys 0m0.379s 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:23.666 ************************************ 00:04:23.666 END TEST custom_alloc 00:04:23.666 ************************************ 00:04:23.666 14:09:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:23.666 14:09:43 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:23.666 14:09:43 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:23.666 14:09:43 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:23.666 14:09:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.666 ************************************ 00:04:23.666 START TEST no_shrink_alloc 00:04:23.666 ************************************ 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:23.666 14:09:43 
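The no_shrink_alloc test that begins in the trace here asks get_test_nr_hugepages for 2097152 across node 0, and the trace shortly settles on nr_hugepages=1024. A minimal sketch of that arithmetic, assuming the size argument is in kB and using the 2048 kB Hugepagesize reported in the /proc/meminfo snapshot further down; the variable names are illustrative, not the script's own:

    size_kb=2097152          # requested hugepage memory for the test (assumed to be kB)
    hugepagesize_kb=2048     # Hugepagesize as reported in /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))   # prints 1024, the per-node page target set below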
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.666 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.194 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.194 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.194 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.194 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.194 14:09:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899312 kB' 'MemAvailable: 9499844 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 459816 kB' 'Inactive: 1473968 kB' 'Active(anon): 127816 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118688 kB' 'Mapped: 48164 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135764 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72760 kB' 'KernelStack: 6344 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.194 14:09:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.194 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.195 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899312 kB' 'MemAvailable: 9499844 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 459180 kB' 'Inactive: 1473968 kB' 'Active(anon): 127180 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
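The loop traced above is setup/common.sh's get_meminfo walking the captured /proc/meminfo snapshot: each line is split with IFS=': ' into a key and a value, every key other than the requested one hits continue, and the matching key echoes its value (0 for AnonHugePages here, hence anon=0). A minimal standalone sketch of that lookup, reconstructed from the trace and therefore only an approximation of the real helper:

    get_meminfo_sketch() {
        # $1 = field to look up, e.g. AnonHugePages or HugePages_Surp
        local get=$1 mem_f=/proc/meminfo var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # value in kB, or a bare count for HugePages_* fields
            return 0
        done < "$mem_f"
        return 1                               # field not present
    }

Given the snapshot printed above, get_meminfo_sketch HugePages_Free would print 1024 on this runner.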
'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118512 kB' 'Mapped: 48036 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135692 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72688 kB' 'KernelStack: 6288 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.196 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 
14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.197 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899312 kB' 'MemAvailable: 9499840 kB' 'Buffers: 2436 kB' 'Cached: 1814000 kB' 'SwapCached: 0 kB' 'Active: 459772 kB' 'Inactive: 1473964 kB' 'Active(anon): 127772 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473964 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118900 kB' 'Mapped: 47916 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135620 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72616 kB' 'KernelStack: 6288 kB' 'PageTables: 3840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
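With anon=0 and surp=0 established, the trace moves on to HugePages_Rsvd; verify_nr_hugepages gathers these values before the per-node comparison that closed the custom_alloc test above with "node0=512 expecting 512". Roughly the shape of that final check, simplified from the visible trace (the real script also builds sorted_t/sorted_s bookkeeping arrays):

    declare -A nodes_test=( [0]=1024 )   # pages assigned to node 0 for no_shrink_alloc, per the trace
    expected=1024
    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_test[$node]} expecting $expected"
        [[ ${nodes_test[$node]} == "$expected" ]]   # the test fails if these differ
    done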
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.198 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 
14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.199 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.200 nr_hugepages=1024 00:04:24.200 resv_hugepages=0 00:04:24.200 surplus_hugepages=0 00:04:24.200 anon_hugepages=0 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.200 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899312 kB' 'MemAvailable: 9499844 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 458968 kB' 'Inactive: 1473968 kB' 'Active(anon): 126968 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 118376 kB' 'Mapped: 47796 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135616 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72612 kB' 'KernelStack: 6272 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 
kB' 'Committed_AS: 345592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 
14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.201 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 
14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
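The repetitive '[[ <field> == ... ]] / continue' trace above comes from get_meminfo in setup/common.sh: the script snapshots /proc/meminfo (or a node's meminfo file) with printf, then re-reads it field by field, skipping every line until the requested key matches and echoing that key's value. A minimal stand-alone sketch of the same lookup, assuming standard /proc/meminfo formatting (the function name get_meminfo_field is illustrative, not the actual SPDK helper):

# Print the numeric value of one meminfo field.
#   get_meminfo_field HugePages_Total      -> system-wide value
#   get_meminfo_field HugePages_Surp 0     -> value for NUMA node 0
get_meminfo_field() {
    local key=$1 node=$2 line field value
    local src=/proc/meminfo
    # Per-node counters live under /sys when NUMA topology is exposed.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && src=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node "$node" }     # node files prefix each line with "Node <N> "
        IFS=': ' read -r field value _ <<< "$line"
        [[ $field == "$key" ]] && { echo "$value"; return 0; }
    done < "$src"
    return 1
}

Against the snapshot printed in this pass, such a lookup returns 0 for HugePages_Rsvd and 1024 for HugePages_Total, which is what the '# echo 0' and '# echo 1024' followed by '# return 0' traces report.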
00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.202 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- 
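The values gathered above feed the consistency check traced next: hugepages.sh compares the HugePages_Total it just read against the sum of the expected page count and the surplus and reserved pages. With the numbers echoed in this run the check is trivially satisfied; a minimal sketch of the same arithmetic (the variable names follow the trace, the error message is illustrative):

nr_hugepages=1024   # expected page count, echoed as nr_hugepages=1024
surp=0              # HugePages_Surp read from /proc/meminfo
resv=0              # HugePages_Rsvd read from /proc/meminfo
total=1024          # HugePages_Total read from /proc/meminfo

# Mirrors the traced checks at hugepages.sh@107/@110: 1024 == 1024 + 0 + 0
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2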
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7899312 kB' 'MemUsed: 4342660 kB' 'SwapCached: 0 kB' 'Active: 459252 kB' 'Inactive: 1473968 kB' 'Active(anon): 127252 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 1816440 kB' 'Mapped: 47796 kB' 'AnonPages: 118408 kB' 'Shmem: 10472 kB' 'KernelStack: 6288 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135612 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.203 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
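The remainder of the verification repeats the lookup per NUMA node: get_nodes enumerates /sys/devices/system/node/node*, and each node's meminfo file is scanned the same way so the per-node count can be compared with the request (hence the 'node0=1024 expecting 1024' echo later in this trace). A short sketch of that walk, using awk instead of the traced read loop and assuming the single-node layout seen here:

# Report the hugepage total of every NUMA node; 1024 is the expected
# per-node count for this run (one node, 2048 kB pages).
expected=1024
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk -v key=HugePages_Total '$1 == "Node" && $3 == key":" {print $4}' \
            "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
done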
[... setup/common.sh@31-32: IFS=': ' / read -r var val _ / continue repeated for each remaining /proc/meminfo field (Unevictable through HugePages_Free), none matching HugePages_Surp ...]
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.463 node0=1024 expecting 1024
14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.463 14:09:43 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:24.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:24.722 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:24.722 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:24.722 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:24.722 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:24.722 INFO: Requested 512 hugepages but 1024 already allocated on node0
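The INFO line above is the behaviour this test (no_shrink_alloc) exercises: setup.sh is run with NRHUGE=512 and CLEAR_HUGE=no, finds 1024 hugepages already allocated on node0, and leaves the larger allocation in place instead of shrinking it, which is why the verification that follows still expects 1024. A minimal sketch of that decision, assuming the standard sysfs hugepage interface (this is not the actual scripts/setup.sh code):

    #!/usr/bin/env bash
    # Sketch only: keep an existing hugepage allocation when it already covers the request.
    requested=${NRHUGE:-512}
    nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(<"$nr")
    if ((current >= requested)); then
        echo "INFO: Requested $requested hugepages but $current already allocated on node0"
    else
        echo "$requested" > "$nr"   # would grow the pool; this branch is never taken in this run
    fi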
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.722 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[... 00:04:24.722 setup/common.sh@17-31: get_meminfo prologue -- local get=AnonHugePages, node=, mem_f=/proc/meminfo, [[ -e /sys/devices/system/node/node/meminfo ]], mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }"), IFS=': ', read -r var val _ ...]
00:04:24.723 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7900424 kB' 'MemAvailable: 9500956 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 459852 kB' 'Inactive: 1473968 kB' 'Active(anon): 127852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118980 kB' 'Mapped: 47808 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135612 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72608 kB' 'KernelStack: 6408 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: each /proc/meminfo field from MemTotal through HardwareCorrupted is read and skipped with continue, none matching AnonHugePages ...]
00:04:24.988 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.988 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.988 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.988 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:24.988 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
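The get_meminfo calls traced in this pass (AnonHugePages above, HugePages_Surp next, then HugePages_Rsvd) all expand to the same pattern: common.sh points mem_f at /proc/meminfo (or a per-node meminfo file when a node is given), slurps it with mapfile, then splits every "Key: value" entry on IFS=': ' until the requested key matches and its value is echoed. A condensed, self-contained sketch of that loop (not the verbatim SPDK helper):

    #!/usr/bin/env bash
    shopt -s extglob
    # Sketch of the traced helper: print the value of one /proc/meminfo (or per-node meminfo) field.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix found in per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long run of "continue" records in the trace
            echo "$val"
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 on this host, matching the trace above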
[... 00:04:24.988 setup/common.sh@17-31: get_meminfo prologue -- local get=HugePages_Surp, node=, mem_f=/proc/meminfo, mapfile -t mem ...]
00:04:24.988 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7903924 kB' 'MemAvailable: 9504456 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 459300 kB' 'Inactive: 1473968 kB' 'Active(anon): 127300 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118480 kB' 'Mapped: 47796 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135556 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72552 kB' 'KernelStack: 6288 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: each /proc/meminfo field from MemTotal through HugePages_Rsvd is read and skipped with continue, none matching HugePages_Surp ...]
00:04:24.990 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.990 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.990 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:24.990 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
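With AnonHugePages and HugePages_Surp both read back as 0, verify_nr_hugepages fetches HugePages_Rsvd next (the dumps report it as 0 as well) and then checks each node's page count against what the test expects, which is where the "node0=1024 expecting 1024" output earlier comes from. A simplified, standalone approximation of that bookkeeping (the real test/setup/hugepages.sh tracks per-node arrays such as nodes_test and nodes_sys):

    #!/usr/bin/env bash
    # Simplified sketch of the verification pass; the values in comments are from the dumps above.
    anon=$(awk '$1 == "AnonHugePages:"   {print $2}' /proc/meminfo)   # 0
    surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)   # 0
    resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)   # 0
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1024
    # Cross-check visible in the dumps: Hugetlb = HugePages_Total * Hugepagesize
    #                                   2097152 kB = 1024 * 2048 kB
    (( anon == 0 && surp == 0 && resv == 0 ))   # this run reports all three as 0
    expected=1024
    echo "node0=$total expecting $expected"
    [[ $total == "$expected" ]]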
00:04:24.990 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... 00:04:24.990 setup/common.sh@17-31: get_meminfo prologue -- local get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, mapfile -t mem ...]
00:04:24.990 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7904176 kB' 'MemAvailable: 9504708 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 459296 kB' 'Inactive: 1473968 kB' 'Active(anon): 127296 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118420 kB' 'Mapped: 47796 kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135556 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72552 kB' 'KernelStack: 6272 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@31-32: each /proc/meminfo field from MemTotal through Bounce is read and skipped with continue; the scan for HugePages_Rsvd is still in progress at this point in the log ...]
00:04:24.991 14:09:44 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # read -r var val _ 00:04:24.991 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.991 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.991 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.991 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.992 nr_hugepages=1024 00:04:24.992 resv_hugepages=0 00:04:24.992 surplus_hugepages=0 00:04:24.992 anon_hugepages=0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.992 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7904176 kB' 'MemAvailable: 9504708 kB' 'Buffers: 2436 kB' 'Cached: 1814004 kB' 'SwapCached: 0 kB' 'Active: 459060 kB' 'Inactive: 1473968 kB' 'Active(anon): 127060 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118472 kB' 'Mapped: 47796 
kB' 'Shmem: 10472 kB' 'KReclaimable: 63004 kB' 'Slab: 135556 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72552 kB' 'KernelStack: 6288 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 171884 kB' 'DirectMap2M: 5070848 kB' 'DirectMap1G: 9437184 kB' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.993 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
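[editor's note] Once the same loop locates HugePages_Total, setup/hugepages.sh runs the consistency check seen further down (@107-@130). A rough sketch of that step under stated assumptions: nodes_test is seeded here by hand, whereas the real script derives it from the pages it just allocated, and the early exit is this sketch's choice, not the script's:

    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    # the global pool must account for every requested, surplus and reserved page
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    declare -A nodes_test=([0]=$nr_hugepages)   # hypothetical seed for a single-node VM
    # per NUMA node: fold reserved pages and node-local surplus into the expected count
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_test[node]} expecting $nr_hugepages"
    done

With resv=0 and no surplus this prints "node0=1024 expecting 1024", which is exactly the line the no_shrink_alloc test echoes before END TEST.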
00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7904176 kB' 'MemUsed: 4337796 kB' 'SwapCached: 0 kB' 'Active: 459360 kB' 'Inactive: 1473968 kB' 'Active(anon): 127360 kB' 'Inactive(anon): 0 kB' 'Active(file): 332000 kB' 'Inactive(file): 1473968 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1816440 kB' 'Mapped: 47796 kB' 'AnonPages: 118516 kB' 'Shmem: 10472 kB' 'KernelStack: 6304 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63004 kB' 'Slab: 135552 kB' 'SReclaimable: 63004 kB' 'SUnreclaim: 72548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.994 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.995 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.996 node0=1024 expecting 1024 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:24.996 ************************************ 00:04:24.996 END TEST no_shrink_alloc 00:04:24.996 ************************************ 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.996 00:04:24.996 real 0m1.438s 00:04:24.996 user 0m0.665s 00:04:24.996 sys 0m0.799s 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.996 14:09:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:24.996 14:09:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:24.996 ************************************ 00:04:24.996 END TEST hugepages 00:04:24.996 ************************************ 00:04:24.996 00:04:24.996 real 0m6.216s 00:04:24.996 user 0m2.854s 00:04:24.996 sys 0m3.338s 00:04:24.996 14:09:44 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:04:24.996 14:09:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.255 14:09:44 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:25.255 14:09:44 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.255 14:09:44 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.255 14:09:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.255 ************************************ 00:04:25.255 START TEST driver 00:04:25.255 ************************************ 00:04:25.255 14:09:44 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:25.255 * Looking for test storage... 00:04:25.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:25.255 14:09:44 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:25.255 14:09:44 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:25.255 14:09:44 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.822 14:09:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:31.822 14:09:50 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.822 14:09:50 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.822 14:09:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:31.822 ************************************ 00:04:31.822 START TEST guess_driver 00:04:31.822 ************************************ 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:31.822 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:31.822 Looking for driver=uio_pci_generic 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.822 14:09:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.822 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:31.822 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:31.822 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:32.391 14:09:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.391 14:09:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:32.391 14:09:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:32.391 14:09:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.391 14:09:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.954 00:04:38.954 real 0m7.223s 00:04:38.954 user 0m0.810s 00:04:38.954 sys 0m1.493s 00:04:38.954 14:09:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.954 14:09:57 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.954 ************************************ 00:04:38.954 END TEST guess_driver 00:04:38.954 ************************************ 00:04:38.954 ************************************ 
00:04:38.954 END TEST driver 00:04:38.954 ************************************ 00:04:38.954 00:04:38.954 real 0m13.269s 00:04:38.954 user 0m1.159s 00:04:38.954 sys 0m2.285s 00:04:38.954 14:09:58 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.954 14:09:58 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.954 14:09:58 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:38.954 14:09:58 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.954 14:09:58 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.954 14:09:58 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.954 ************************************ 00:04:38.954 START TEST devices 00:04:38.954 ************************************ 00:04:38.954 14:09:58 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:38.954 * Looking for test storage... 00:04:38.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.954 14:09:58 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.954 14:09:58 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.954 14:09:58 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.954 14:09:58 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices 
-- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:39.889 14:09:59 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:39.889 14:09:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:39.889 14:09:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:39.889 No valid GPT data, bailing 00:04:39.889 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.889 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.889 14:09:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:39.889 
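The device-selection loop traced here boils down to three checks per NVMe namespace: skip zoned namespaces, skip anything that already carries a partition table, and require a minimum capacity. A minimal stand-alone sketch of that logic (illustrative only; the blkid-only partition check and the size lookup are assumptions, not the exact setup/devices.sh code):

  shopt -s extglob
  min_disk_size=3221225472                      # bytes; same threshold as devices.sh@198
  for sys in /sys/block/nvme!(*c*); do          # same glob as the trace, skips nvme3c3n1
      dev=${sys##*/}
      # zoned namespaces are excluded ("none" means a conventional block device)
      if [[ -e $sys/queue/zoned && $(<"$sys/queue/zoned") != none ]]; then
          continue
      fi
      # an existing partition table means the namespace is already in use
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
      # /sys/block/<dev>/size is in 512-byte sectors
      size_bytes=$(( $(<"$sys/size") * 512 ))
      (( size_bytes >= min_disk_size )) && echo "usable: $dev ($size_bytes bytes)"
  done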
14:09:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:39.889 14:09:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:39.889 14:09:59 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:39.889 14:09:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:39.890 No valid GPT data, bailing 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:39.890 No valid GPT data, bailing 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:39.890 14:09:59 setup.sh.devices 
-- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:39.890 No valid GPT data, bailing 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:39.890 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:39.890 14:09:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:40.149 No valid GPT data, bailing 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:40.149 14:09:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:40.149 14:09:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:40.149 14:09:59 setup.sh.devices -- 
setup/common.sh@80 -- # echo 4294967296 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:40.149 No valid GPT data, bailing 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:40.149 14:09:59 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:40.149 14:09:59 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:40.149 14:09:59 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:40.149 14:09:59 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:40.149 14:09:59 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:40.149 14:09:59 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:40.149 14:09:59 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:40.149 14:09:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:40.149 ************************************ 00:04:40.149 START TEST nvme_mount 00:04:40.149 ************************************ 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:40.149 
14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.149 14:09:59 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:41.085 Creating new GPT entries in memory. 00:04:41.085 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.085 other utilities. 00:04:41.085 14:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.085 14:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.085 14:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.085 14:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.085 14:10:00 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:42.460 Creating new GPT entries in memory. 00:04:42.460 The operation has completed successfully. 
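Partitioning has just succeeded at this point; the trace that follows formats and mounts the new partition. Condensed, the nvme_mount flow is roughly the sequence below (a sketch based on the commands visible in the log; the real script waits on scripts/sync_dev_uevents.sh for the partition uevent rather than udevadm settle, and carries its own error handling):

  disk=/dev/nvme0n1
  mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                  # wipe any existing GPT/MBR
  sgdisk "$disk" --new=1:2048:264191        # 262144-sector partition 1, as traced
  udevadm settle                            # stand-in for sync_dev_uevents.sh
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  touch "$mnt/test_nvme"                    # dummy file later checked and removed by the test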
00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59408 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.460 14:10:01 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.460 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.460 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:42.460 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.460 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.460 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.460 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.723 14:10:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.723 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.724 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.724 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.724 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.724 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.991 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:42.991 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.250 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.250 14:10:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.509 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.509 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.509 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.509 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.509 14:10:03 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:43.767 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.767 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:43.767 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:43.767 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.767 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:43.767 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.025 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:44.025 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.025 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:44.025 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.025 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:44.025 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.284 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:44.284 14:10:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.543 14:10:04 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:44.802 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:44.802 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:44.802 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.802 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.802 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:44.802 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.061 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.061 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.061 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.061 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.062 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.062 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.320 14:10:04 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:45.321 14:10:04 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:45.580 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:45.580 00:04:45.580 real 0m5.436s 00:04:45.580 user 0m1.522s 00:04:45.580 sys 0m1.577s 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.580 14:10:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:45.580 ************************************ 00:04:45.580 END TEST nvme_mount 00:04:45.580 ************************************ 00:04:45.580 14:10:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:45.580 14:10:05 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.580 14:10:05 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.580 14:10:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:45.580 ************************************ 00:04:45.580 START TEST dm_mount 00:04:45.580 ************************************ 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:45.580 14:10:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:46.955 Creating new GPT entries in memory. 00:04:46.955 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:46.955 other utilities. 00:04:46.955 14:10:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:46.955 14:10:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:46.955 14:10:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:46.955 14:10:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:46.955 14:10:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:47.892 Creating new GPT entries in memory. 00:04:47.892 The operation has completed successfully. 00:04:47.892 14:10:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:47.892 14:10:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:47.892 14:10:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:47.892 14:10:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:47.892 14:10:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:48.829 The operation has completed successfully. 
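With the second partition in place, the test assembles a device-mapper device named nvme_dm_test on top of nvme0n1p1 and nvme0n1p2, formats it, and mounts it. The trace shows only "dmsetup create nvme_dm_test", not its table, so the linear concatenation below is an assumption about how the two partitions are combined:

  p1=/dev/nvme0n1p1
  p2=/dev/nvme0n1p2
  s1=$(blockdev --getsz "$p1")        # sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")
  # table format: <start> <length> linear <backing device> <offset>
  printf '0 %s linear %s 0\n%s %s linear %s 0\n' "$s1" "$p1" "$s1" "$s2" "$p2" |
      dmsetup create nvme_dm_test
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount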
00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60048 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.829 14:10:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.089 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.349 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.349 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.349 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.349 14:10:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.608 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:49.608 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.867 14:10:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.126 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.385 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.385 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.385 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.385 14:10:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.644 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:50.644 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
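The cleanup path traced around here (cleanup_dm followed by cleanup_nvme) reduces to unmounting, tearing down the dm device, and wiping signatures; a condensed sketch of those steps:

  dm_mnt=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount
  nvme_mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
  mountpoint -q "$dm_mnt"   && umount "$dm_mnt"
  mountpoint -q "$nvme_mnt" && umount "$nvme_mnt"
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # drops the ext4 signature
  [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
  [[ -b /dev/nvme0n1   ]] && wipefs --all /dev/nvme0n1     # erases GPT headers and the PMBR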
00:04:50.903 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:50.903 00:04:50.903 real 0m5.217s 00:04:50.903 user 0m1.027s 00:04:50.903 sys 0m1.087s 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.903 ************************************ 00:04:50.903 14:10:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:50.903 END TEST dm_mount 00:04:50.903 ************************************ 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.903 14:10:10 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:51.177 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.177 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:51.177 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:51.177 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:51.177 14:10:10 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:51.177 00:04:51.177 real 0m12.733s 00:04:51.177 user 0m3.510s 00:04:51.177 sys 0m3.475s 00:04:51.177 14:10:10 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.177 ************************************ 00:04:51.177 END TEST devices 00:04:51.177 ************************************ 00:04:51.177 14:10:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:51.177 00:04:51.177 real 0m44.342s 00:04:51.177 user 0m10.594s 00:04:51.177 sys 0m13.124s 00:04:51.177 14:10:10 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.177 ************************************ 00:04:51.177 14:10:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:51.177 END TEST setup.sh 00:04:51.177 ************************************ 00:04:51.177 14:10:10 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:51.756 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:52.324 Hugepages 00:04:52.324 node hugesize free / total 00:04:52.324 node0 1048576kB 0 / 0 00:04:52.324 node0 2048kB 2048 / 2048 00:04:52.324 
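The cleanup traced above comes down to a short sequence of checks and wipes; condensed here with the paths and device names from this run (each step is skipped when its guard fails):

  mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount        # unmount first if this still succeeds
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1                # drops the ext4 magic (53 ef)
  [[ -b /dev/nvme0n1p2 ]] && wipefs --all /dev/nvme0n1p2
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1                    # erases both GPT headers and the protective MBR, then re-reads the partition table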
00:04:52.325 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:52.325 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:52.325 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:52.583 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:52.583 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:52.583 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:52.583 14:10:12 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.583 14:10:12 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.583 14:10:12 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.583 14:10:12 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:53.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.728 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.728 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.728 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.728 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:53.991 14:10:13 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:54.927 14:10:14 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:54.927 14:10:14 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:54.927 14:10:14 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:54.927 14:10:14 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:54.927 14:10:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:54.927 14:10:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:54.927 14:10:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:54.927 14:10:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:54.927 14:10:14 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:54.927 14:10:14 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:04:54.927 14:10:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:54.927 14:10:14 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:55.186 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.445 Waiting for block devices as requested 00:04:55.445 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.704 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.704 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:55.704 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:00.979 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:00.979 14:10:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:00.979 14:10:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:05:00.979 14:10:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:00.979 14:10:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1557 -- # continue 00:05:00.979 14:10:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:00.979 14:10:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1557 -- # continue 00:05:00.979 14:10:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:12.0/nvme/nvme 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 
/sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:00.979 14:10:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1557 -- # continue 00:05:00.979 14:10:20 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:05:00.979 14:10:20 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:00.979 14:10:20 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:00.979 14:10:20 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:00.979 14:10:20 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:00.979 14:10:20 -- common/autotest_common.sh@1557 -- # continue 00:05:00.979 14:10:20 -- spdk/autotest.sh@135 -- # 
timing_exit pre_cleanup 00:05:00.979 14:10:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:00.979 14:10:20 -- common/autotest_common.sh@10 -- # set +x 00:05:00.979 14:10:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:00.980 14:10:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:00.980 14:10:20 -- common/autotest_common.sh@10 -- # set +x 00:05:00.980 14:10:20 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.548 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.116 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.116 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.116 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.116 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.116 14:10:21 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:02.116 14:10:21 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:02.116 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:05:02.375 14:10:21 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:02.375 14:10:21 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:02.375 14:10:21 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:02.375 14:10:21 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:02.375 14:10:21 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:02.375 14:10:21 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:02.375 14:10:21 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:02.375 14:10:21 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:02.375 14:10:21 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.375 14:10:21 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.375 14:10:21 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:02.375 14:10:21 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:05:02.375 14:10:21 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:02.375 14:10:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:02.375 14:10:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:02.375 14:10:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:02.375 14:10:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.375 14:10:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:02.375 14:10:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:02.375 14:10:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:02.375 14:10:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.376 14:10:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:02.376 14:10:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:02.376 14:10:21 -- common/autotest_common.sh@1580 -- # device=0x0010 00:05:02.376 14:10:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.376 14:10:21 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:02.376 14:10:21 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:02.376 14:10:21 -- common/autotest_common.sh@1580 -- # device=0x0010 
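The nvme_namespace_revert loop above resolves each PCI BDF to its NVMe character device through sysfs and then inspects the controller's OACS and unvmcap fields; a condensed sketch for 0000:00:10.0 (the bdf/sysfs/ctrlr variable names are illustrative, not taken from the scripts):

  bdf=0000:00:10.0                                                      # the loop walks all four controllers the same way
  sysfs=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")    # -> /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
  ctrlr=/dev/$(basename "$sysfs")                                       # -> /dev/nvme1
  nvme id-ctrl "$ctrlr" | grep oacs                                     # oacs : 0x12a; bit 3 (0x8) set -> namespace management supported
  nvme id-ctrl "$ctrlr" | grep unvmcap                                  # unvmcap : 0 -> no unallocated capacity, so the revert path is skipped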
00:05:02.376 14:10:21 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:02.376 14:10:21 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:02.376 14:10:22 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:02.376 14:10:22 -- common/autotest_common.sh@1593 -- # return 0 00:05:02.376 14:10:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:02.376 14:10:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:02.376 14:10:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.376 14:10:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:02.376 14:10:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:02.376 14:10:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:02.376 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:02.376 14:10:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:02.376 14:10:22 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:02.376 14:10:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.376 14:10:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.376 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:05:02.376 ************************************ 00:05:02.376 START TEST env 00:05:02.376 ************************************ 00:05:02.376 14:10:22 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:02.376 * Looking for test storage... 00:05:02.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:02.376 14:10:22 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:02.376 14:10:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.376 14:10:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.376 14:10:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.376 ************************************ 00:05:02.376 START TEST env_memory 00:05:02.376 ************************************ 00:05:02.376 14:10:22 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:02.376 00:05:02.376 00:05:02.376 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.376 http://cunit.sourceforge.net/ 00:05:02.376 00:05:02.376 00:05:02.376 Suite: memory 00:05:02.635 Test: alloc and free memory map ...[2024-07-26 14:10:22.180865] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:02.635 passed 00:05:02.635 Test: mem map translation ...[2024-07-26 14:10:22.233825] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:02.635 [2024-07-26 14:10:22.233942] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:02.635 [2024-07-26 14:10:22.234054] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:02.635 [2024-07-26 14:10:22.234093] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:02.635 passed 00:05:02.635 Test: mem map registration ...[2024-07-26 14:10:22.341650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid 
spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:02.635 [2024-07-26 14:10:22.341734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:02.635 passed 00:05:02.894 Test: mem map adjacent registrations ...passed 00:05:02.894 00:05:02.894 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.894 suites 1 1 n/a 0 0 00:05:02.894 tests 4 4 4 0 0 00:05:02.894 asserts 152 152 152 0 n/a 00:05:02.894 00:05:02.894 Elapsed time = 0.331 seconds 00:05:02.894 ************************************ 00:05:02.894 END TEST env_memory 00:05:02.894 ************************************ 00:05:02.894 00:05:02.894 real 0m0.369s 00:05:02.894 user 0m0.336s 00:05:02.894 sys 0m0.026s 00:05:02.894 14:10:22 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.894 14:10:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:02.894 14:10:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:02.894 14:10:22 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.894 14:10:22 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.894 14:10:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.894 ************************************ 00:05:02.894 START TEST env_vtophys 00:05:02.894 ************************************ 00:05:02.894 14:10:22 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:02.894 EAL: lib.eal log level changed from notice to debug 00:05:02.894 EAL: Detected lcore 0 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 1 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 2 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 3 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 4 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 5 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 6 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 7 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 8 as core 0 on socket 0 00:05:02.894 EAL: Detected lcore 9 as core 0 on socket 0 00:05:02.894 EAL: Maximum logical cores by configuration: 128 00:05:02.894 EAL: Detected CPU lcores: 10 00:05:02.894 EAL: Detected NUMA nodes: 1 00:05:02.894 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:02.894 EAL: Detected shared linkage of DPDK 00:05:02.894 EAL: No shared files mode enabled, IPC will be disabled 00:05:02.894 EAL: Selected IOVA mode 'PA' 00:05:02.894 EAL: Probing VFIO support... 00:05:02.894 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:02.894 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:02.894 EAL: Ask a virtual area of 0x2e000 bytes 00:05:02.894 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:02.894 EAL: Setting up physically contiguous memory... 
00:05:02.894 EAL: Setting maximum number of open files to 524288 00:05:02.894 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:02.894 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:02.894 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.894 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:02.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.894 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.894 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:02.894 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:02.894 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.894 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:02.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.894 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.894 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:02.894 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:02.894 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.894 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:02.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.894 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.894 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:02.894 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:02.894 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.894 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:02.894 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.894 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.894 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:02.894 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:02.894 EAL: Hugepages will be freed exactly as allocated. 00:05:02.894 EAL: No shared files mode enabled, IPC is disabled 00:05:02.894 EAL: No shared files mode enabled, IPC is disabled 00:05:03.153 EAL: TSC frequency is ~2200000 KHz 00:05:03.153 EAL: Main lcore 0 is ready (tid=7fc9adaf5a40;cpuset=[0]) 00:05:03.153 EAL: Trying to obtain current memory policy. 00:05:03.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.153 EAL: Restoring previous memory policy: 0 00:05:03.153 EAL: request: mp_malloc_sync 00:05:03.153 EAL: No shared files mode enabled, IPC is disabled 00:05:03.153 EAL: Heap on socket 0 was expanded by 2MB 00:05:03.153 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:03.153 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:03.153 EAL: Mem event callback 'spdk:(nil)' registered 00:05:03.153 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:03.153 00:05:03.153 00:05:03.153 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.153 http://cunit.sourceforge.net/ 00:05:03.153 00:05:03.153 00:05:03.153 Suite: components_suite 00:05:03.412 Test: vtophys_malloc_test ...passed 00:05:03.412 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:03.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.412 EAL: Restoring previous memory policy: 4 00:05:03.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.412 EAL: request: mp_malloc_sync 00:05:03.412 EAL: No shared files mode enabled, IPC is disabled 00:05:03.412 EAL: Heap on socket 0 was expanded by 4MB 00:05:03.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.412 EAL: request: mp_malloc_sync 00:05:03.412 EAL: No shared files mode enabled, IPC is disabled 00:05:03.412 EAL: Heap on socket 0 was shrunk by 4MB 00:05:03.412 EAL: Trying to obtain current memory policy. 00:05:03.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.413 EAL: Restoring previous memory policy: 4 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was expanded by 6MB 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was shrunk by 6MB 00:05:03.413 EAL: Trying to obtain current memory policy. 00:05:03.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.413 EAL: Restoring previous memory policy: 4 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was expanded by 10MB 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was shrunk by 10MB 00:05:03.413 EAL: Trying to obtain current memory policy. 00:05:03.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.413 EAL: Restoring previous memory policy: 4 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was expanded by 18MB 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was shrunk by 18MB 00:05:03.413 EAL: Trying to obtain current memory policy. 00:05:03.413 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.413 EAL: Restoring previous memory policy: 4 00:05:03.413 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.413 EAL: request: mp_malloc_sync 00:05:03.413 EAL: No shared files mode enabled, IPC is disabled 00:05:03.413 EAL: Heap on socket 0 was expanded by 34MB 00:05:03.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.681 EAL: request: mp_malloc_sync 00:05:03.681 EAL: No shared files mode enabled, IPC is disabled 00:05:03.681 EAL: Heap on socket 0 was shrunk by 34MB 00:05:03.681 EAL: Trying to obtain current memory policy. 
00:05:03.681 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.681 EAL: Restoring previous memory policy: 4 00:05:03.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.681 EAL: request: mp_malloc_sync 00:05:03.681 EAL: No shared files mode enabled, IPC is disabled 00:05:03.681 EAL: Heap on socket 0 was expanded by 66MB 00:05:03.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.681 EAL: request: mp_malloc_sync 00:05:03.681 EAL: No shared files mode enabled, IPC is disabled 00:05:03.681 EAL: Heap on socket 0 was shrunk by 66MB 00:05:03.681 EAL: Trying to obtain current memory policy. 00:05:03.681 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.681 EAL: Restoring previous memory policy: 4 00:05:03.681 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.681 EAL: request: mp_malloc_sync 00:05:03.681 EAL: No shared files mode enabled, IPC is disabled 00:05:03.681 EAL: Heap on socket 0 was expanded by 130MB 00:05:03.940 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.940 EAL: request: mp_malloc_sync 00:05:03.940 EAL: No shared files mode enabled, IPC is disabled 00:05:03.940 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.199 EAL: Trying to obtain current memory policy. 00:05:04.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.199 EAL: Restoring previous memory policy: 4 00:05:04.199 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.199 EAL: request: mp_malloc_sync 00:05:04.199 EAL: No shared files mode enabled, IPC is disabled 00:05:04.199 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.458 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.458 EAL: request: mp_malloc_sync 00:05:04.458 EAL: No shared files mode enabled, IPC is disabled 00:05:04.458 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.718 EAL: Trying to obtain current memory policy. 00:05:04.718 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.718 EAL: Restoring previous memory policy: 4 00:05:04.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.718 EAL: request: mp_malloc_sync 00:05:04.718 EAL: No shared files mode enabled, IPC is disabled 00:05:04.718 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.653 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.653 EAL: request: mp_malloc_sync 00:05:05.653 EAL: No shared files mode enabled, IPC is disabled 00:05:05.654 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.220 EAL: Trying to obtain current memory policy. 
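The repeated spdk:(nil) mem event callbacks above track the test's allocations directly: the heap expansions logged so far go 4, 6, 10, 18, 34, 66, 130, 258 and 514 MB (2^k + 2 MB at each step), and every expansion is paired with an equal shrink once the corresponding buffer is freed.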
00:05:06.220 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.220 EAL: Restoring previous memory policy: 4 00:05:06.220 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.220 EAL: request: mp_malloc_sync 00:05:06.220 EAL: No shared files mode enabled, IPC is disabled 00:05:06.220 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.598 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.598 EAL: request: mp_malloc_sync 00:05:07.598 EAL: No shared files mode enabled, IPC is disabled 00:05:07.598 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:08.975 passed 00:05:08.975 00:05:08.975 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.975 suites 1 1 n/a 0 0 00:05:08.975 tests 2 2 2 0 0 00:05:08.975 asserts 5411 5411 5411 0 n/a 00:05:08.975 00:05:08.975 Elapsed time = 5.684 seconds 00:05:08.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.975 EAL: request: mp_malloc_sync 00:05:08.975 EAL: No shared files mode enabled, IPC is disabled 00:05:08.975 EAL: Heap on socket 0 was shrunk by 2MB 00:05:08.975 EAL: No shared files mode enabled, IPC is disabled 00:05:08.975 EAL: No shared files mode enabled, IPC is disabled 00:05:08.975 EAL: No shared files mode enabled, IPC is disabled 00:05:08.975 ************************************ 00:05:08.975 END TEST env_vtophys 00:05:08.975 ************************************ 00:05:08.975 00:05:08.975 real 0m5.997s 00:05:08.975 user 0m5.273s 00:05:08.975 sys 0m0.578s 00:05:08.975 14:10:28 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.975 14:10:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:08.975 14:10:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.975 14:10:28 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.975 14:10:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.975 14:10:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.975 ************************************ 00:05:08.975 START TEST env_pci 00:05:08.975 ************************************ 00:05:08.975 14:10:28 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:08.975 00:05:08.975 00:05:08.975 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.975 http://cunit.sourceforge.net/ 00:05:08.975 00:05:08.975 00:05:08.975 Suite: pci 00:05:08.975 Test: pci_hook ...[2024-07-26 14:10:28.611615] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61860 has claimed it 00:05:08.975 passed 00:05:08.975 00:05:08.975 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.975 suites 1 1 n/a 0 0 00:05:08.975 tests 1 1 1 0 0 00:05:08.975 asserts 25 25 25 0 n/a 00:05:08.975 00:05:08.975 Elapsed time = 0.007 seconds 00:05:08.975 EAL: Cannot find device (10000:00:01.0) 00:05:08.975 EAL: Failed to attach device on primary process 00:05:08.975 ************************************ 00:05:08.975 END TEST env_pci 00:05:08.975 ************************************ 00:05:08.975 00:05:08.975 real 0m0.083s 00:05:08.975 user 0m0.040s 00:05:08.975 sys 0m0.043s 00:05:08.975 14:10:28 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.975 14:10:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.975 14:10:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.975 14:10:28 env -- env/env.sh@15 -- # uname 00:05:08.975 14:10:28 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.975 14:10:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.975 14:10:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.975 14:10:28 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:08.975 14:10:28 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.975 14:10:28 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.975 ************************************ 00:05:08.975 START TEST env_dpdk_post_init 00:05:08.975 ************************************ 00:05:08.975 14:10:28 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:09.234 EAL: Detected CPU lcores: 10 00:05:09.234 EAL: Detected NUMA nodes: 1 00:05:09.234 EAL: Detected shared linkage of DPDK 00:05:09.234 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.234 EAL: Selected IOVA mode 'PA' 00:05:09.234 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:09.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:09.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:09.234 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:09.234 Starting DPDK initialization... 00:05:09.234 Starting SPDK post initialization... 00:05:09.234 SPDK NVMe probe 00:05:09.234 Attaching to 0000:00:10.0 00:05:09.234 Attaching to 0000:00:11.0 00:05:09.234 Attaching to 0000:00:12.0 00:05:09.234 Attaching to 0000:00:13.0 00:05:09.234 Attached to 0000:00:10.0 00:05:09.234 Attached to 0000:00:11.0 00:05:09.234 Attached to 0000:00:13.0 00:05:09.234 Attached to 0000:00:12.0 00:05:09.234 Cleaning up... 
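The probe/attach lines above come from the env_dpdk_post_init binary being run directly after setup.sh has rebound the controllers to uio_pci_generic; a minimal sketch of reproducing that by hand (the sudo/HUGEMEM wrapper is an assumption, not taken from this log):

  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh       # rebind 0000:00:10.0-13.0 away from the kernel nvme driver and reserve hugepages
  /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000                             # single-core mask and fixed base VA, as in the run above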
00:05:09.493 ************************************ 00:05:09.493 END TEST env_dpdk_post_init 00:05:09.493 ************************************ 00:05:09.493 00:05:09.493 real 0m0.283s 00:05:09.493 user 0m0.104s 00:05:09.493 sys 0m0.083s 00:05:09.493 14:10:28 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.493 14:10:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.494 14:10:29 env -- env/env.sh@26 -- # uname 00:05:09.494 14:10:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:09.494 14:10:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.494 14:10:29 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.494 14:10:29 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.494 14:10:29 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.494 ************************************ 00:05:09.494 START TEST env_mem_callbacks 00:05:09.494 ************************************ 00:05:09.494 14:10:29 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:09.494 EAL: Detected CPU lcores: 10 00:05:09.494 EAL: Detected NUMA nodes: 1 00:05:09.494 EAL: Detected shared linkage of DPDK 00:05:09.494 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:09.494 EAL: Selected IOVA mode 'PA' 00:05:09.494 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:09.494 00:05:09.494 00:05:09.494 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.494 http://cunit.sourceforge.net/ 00:05:09.494 00:05:09.494 00:05:09.494 Suite: memory 00:05:09.494 Test: test ... 00:05:09.494 register 0x200000200000 2097152 00:05:09.494 malloc 3145728 00:05:09.494 register 0x200000400000 4194304 00:05:09.494 buf 0x2000004fffc0 len 3145728 PASSED 00:05:09.494 malloc 64 00:05:09.494 buf 0x2000004ffec0 len 64 PASSED 00:05:09.494 malloc 4194304 00:05:09.494 register 0x200000800000 6291456 00:05:09.494 buf 0x2000009fffc0 len 4194304 PASSED 00:05:09.494 free 0x2000004fffc0 3145728 00:05:09.494 free 0x2000004ffec0 64 00:05:09.494 unregister 0x200000400000 4194304 PASSED 00:05:09.494 free 0x2000009fffc0 4194304 00:05:09.753 unregister 0x200000800000 6291456 PASSED 00:05:09.753 malloc 8388608 00:05:09.753 register 0x200000400000 10485760 00:05:09.753 buf 0x2000005fffc0 len 8388608 PASSED 00:05:09.753 free 0x2000005fffc0 8388608 00:05:09.753 unregister 0x200000400000 10485760 PASSED 00:05:09.753 passed 00:05:09.753 00:05:09.753 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.753 suites 1 1 n/a 0 0 00:05:09.753 tests 1 1 1 0 0 00:05:09.753 asserts 15 15 15 0 n/a 00:05:09.753 00:05:09.753 Elapsed time = 0.052 seconds 00:05:09.753 00:05:09.753 real 0m0.252s 00:05:09.753 user 0m0.087s 00:05:09.753 sys 0m0.060s 00:05:09.753 ************************************ 00:05:09.753 END TEST env_mem_callbacks 00:05:09.753 ************************************ 00:05:09.753 14:10:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.753 14:10:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:09.753 00:05:09.753 real 0m7.321s 00:05:09.753 user 0m5.951s 00:05:09.753 sys 0m0.991s 00:05:09.753 ************************************ 00:05:09.753 END TEST env 00:05:09.753 ************************************ 00:05:09.753 14:10:29 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.753 14:10:29 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.753 14:10:29 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:09.753 14:10:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.753 14:10:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.753 14:10:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.753 ************************************ 00:05:09.753 START TEST rpc 00:05:09.753 ************************************ 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:09.753 * Looking for test storage... 00:05:09.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.753 14:10:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:09.753 14:10:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=61979 00:05:09.753 14:10:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.753 14:10:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 61979 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@831 -- # '[' -z 61979 ']' 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.753 14:10:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.012 [2024-07-26 14:10:29.573224] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:10.012 [2024-07-26 14:10:29.573385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61979 ] 00:05:10.012 [2024-07-26 14:10:29.729324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.270 [2024-07-26 14:10:29.892477] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:10.270 [2024-07-26 14:10:29.892536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61979' to capture a snapshot of events at runtime. 00:05:10.270 [2024-07-26 14:10:29.892585] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:10.270 [2024-07-26 14:10:29.892595] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:10.270 [2024-07-26 14:10:29.892606] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61979 for offline analysis/debug. 
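With spdk_tgt started as above and listening on /var/tmp/spdk.sock, the bdev RPCs that the rpc_integrity test below drives through rpc_cmd can also be issued by hand via scripts/rpc.py; a minimal sketch (the $rpc shorthand is only for readability):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc bdev_malloc_create 8 512                       # 8 MB malloc bdev with 512-byte blocks; prints the new name (Malloc0 here)
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0   # claim Malloc0 behind a passthru bdev
  $rpc bdev_get_bdevs | jq length                     # 2 while both bdevs exist
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0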
00:05:10.270 [2024-07-26 14:10:29.892645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.839 14:10:30 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.839 14:10:30 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.839 14:10:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.839 14:10:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:10.839 14:10:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:10.839 14:10:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:10.839 14:10:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.839 14:10:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.839 14:10:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.839 ************************************ 00:05:10.839 START TEST rpc_integrity 00:05:10.839 ************************************ 00:05:10.839 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:10.839 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.839 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.839 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.839 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.839 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.839 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.839 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.839 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.839 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.839 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.099 { 00:05:11.099 "name": "Malloc0", 00:05:11.099 "aliases": [ 00:05:11.099 "3eb6b0bd-007e-49c3-a54c-58d212de0498" 00:05:11.099 ], 00:05:11.099 "product_name": "Malloc disk", 00:05:11.099 "block_size": 512, 00:05:11.099 "num_blocks": 16384, 00:05:11.099 "uuid": "3eb6b0bd-007e-49c3-a54c-58d212de0498", 00:05:11.099 "assigned_rate_limits": { 00:05:11.099 "rw_ios_per_sec": 0, 00:05:11.099 "rw_mbytes_per_sec": 0, 00:05:11.099 "r_mbytes_per_sec": 0, 00:05:11.099 "w_mbytes_per_sec": 0 00:05:11.099 }, 00:05:11.099 "claimed": false, 00:05:11.099 "zoned": false, 00:05:11.099 "supported_io_types": { 00:05:11.099 "read": true, 00:05:11.099 "write": true, 00:05:11.099 "unmap": true, 00:05:11.099 "flush": true, 
00:05:11.099 "reset": true, 00:05:11.099 "nvme_admin": false, 00:05:11.099 "nvme_io": false, 00:05:11.099 "nvme_io_md": false, 00:05:11.099 "write_zeroes": true, 00:05:11.099 "zcopy": true, 00:05:11.099 "get_zone_info": false, 00:05:11.099 "zone_management": false, 00:05:11.099 "zone_append": false, 00:05:11.099 "compare": false, 00:05:11.099 "compare_and_write": false, 00:05:11.099 "abort": true, 00:05:11.099 "seek_hole": false, 00:05:11.099 "seek_data": false, 00:05:11.099 "copy": true, 00:05:11.099 "nvme_iov_md": false 00:05:11.099 }, 00:05:11.099 "memory_domains": [ 00:05:11.099 { 00:05:11.099 "dma_device_id": "system", 00:05:11.099 "dma_device_type": 1 00:05:11.099 }, 00:05:11.099 { 00:05:11.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.099 "dma_device_type": 2 00:05:11.099 } 00:05:11.099 ], 00:05:11.099 "driver_specific": {} 00:05:11.099 } 00:05:11.099 ]' 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.099 [2024-07-26 14:10:30.683432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:11.099 [2024-07-26 14:10:30.683512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.099 [2024-07-26 14:10:30.683549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:11.099 [2024-07-26 14:10:30.683564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.099 [2024-07-26 14:10:30.685881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.099 [2024-07-26 14:10:30.685959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.099 Passthru0 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.099 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:11.099 { 00:05:11.099 "name": "Malloc0", 00:05:11.099 "aliases": [ 00:05:11.099 "3eb6b0bd-007e-49c3-a54c-58d212de0498" 00:05:11.099 ], 00:05:11.099 "product_name": "Malloc disk", 00:05:11.099 "block_size": 512, 00:05:11.099 "num_blocks": 16384, 00:05:11.099 "uuid": "3eb6b0bd-007e-49c3-a54c-58d212de0498", 00:05:11.099 "assigned_rate_limits": { 00:05:11.099 "rw_ios_per_sec": 0, 00:05:11.099 "rw_mbytes_per_sec": 0, 00:05:11.099 "r_mbytes_per_sec": 0, 00:05:11.099 "w_mbytes_per_sec": 0 00:05:11.099 }, 00:05:11.099 "claimed": true, 00:05:11.099 "claim_type": "exclusive_write", 00:05:11.099 "zoned": false, 00:05:11.099 "supported_io_types": { 00:05:11.099 "read": true, 00:05:11.099 "write": true, 00:05:11.099 "unmap": true, 00:05:11.099 "flush": true, 00:05:11.099 "reset": true, 00:05:11.099 "nvme_admin": false, 00:05:11.099 "nvme_io": false, 00:05:11.099 "nvme_io_md": false, 00:05:11.099 "write_zeroes": true, 00:05:11.099 "zcopy": true, 
00:05:11.099 "get_zone_info": false, 00:05:11.099 "zone_management": false, 00:05:11.099 "zone_append": false, 00:05:11.099 "compare": false, 00:05:11.099 "compare_and_write": false, 00:05:11.099 "abort": true, 00:05:11.099 "seek_hole": false, 00:05:11.099 "seek_data": false, 00:05:11.099 "copy": true, 00:05:11.099 "nvme_iov_md": false 00:05:11.099 }, 00:05:11.099 "memory_domains": [ 00:05:11.099 { 00:05:11.099 "dma_device_id": "system", 00:05:11.099 "dma_device_type": 1 00:05:11.099 }, 00:05:11.099 { 00:05:11.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.099 "dma_device_type": 2 00:05:11.099 } 00:05:11.099 ], 00:05:11.099 "driver_specific": {} 00:05:11.099 }, 00:05:11.099 { 00:05:11.099 "name": "Passthru0", 00:05:11.099 "aliases": [ 00:05:11.099 "484d89fc-8457-55c0-be10-714439f4086c" 00:05:11.099 ], 00:05:11.099 "product_name": "passthru", 00:05:11.099 "block_size": 512, 00:05:11.099 "num_blocks": 16384, 00:05:11.099 "uuid": "484d89fc-8457-55c0-be10-714439f4086c", 00:05:11.099 "assigned_rate_limits": { 00:05:11.099 "rw_ios_per_sec": 0, 00:05:11.099 "rw_mbytes_per_sec": 0, 00:05:11.099 "r_mbytes_per_sec": 0, 00:05:11.099 "w_mbytes_per_sec": 0 00:05:11.099 }, 00:05:11.099 "claimed": false, 00:05:11.099 "zoned": false, 00:05:11.099 "supported_io_types": { 00:05:11.099 "read": true, 00:05:11.099 "write": true, 00:05:11.099 "unmap": true, 00:05:11.099 "flush": true, 00:05:11.099 "reset": true, 00:05:11.099 "nvme_admin": false, 00:05:11.099 "nvme_io": false, 00:05:11.099 "nvme_io_md": false, 00:05:11.099 "write_zeroes": true, 00:05:11.099 "zcopy": true, 00:05:11.099 "get_zone_info": false, 00:05:11.099 "zone_management": false, 00:05:11.099 "zone_append": false, 00:05:11.099 "compare": false, 00:05:11.099 "compare_and_write": false, 00:05:11.099 "abort": true, 00:05:11.099 "seek_hole": false, 00:05:11.099 "seek_data": false, 00:05:11.099 "copy": true, 00:05:11.099 "nvme_iov_md": false 00:05:11.099 }, 00:05:11.099 "memory_domains": [ 00:05:11.099 { 00:05:11.099 "dma_device_id": "system", 00:05:11.099 "dma_device_type": 1 00:05:11.099 }, 00:05:11.099 { 00:05:11.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.099 "dma_device_type": 2 00:05:11.099 } 00:05:11.099 ], 00:05:11.099 "driver_specific": { 00:05:11.099 "passthru": { 00:05:11.099 "name": "Passthru0", 00:05:11.099 "base_bdev_name": "Malloc0" 00:05:11.099 } 00:05:11.099 } 00:05:11.099 } 00:05:11.099 ]' 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:11.099 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.100 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.100 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:11.100 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.100 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:11.100 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:11.361 ************************************ 00:05:11.361 END TEST rpc_integrity 00:05:11.361 ************************************ 00:05:11.361 14:10:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:11.361 00:05:11.361 real 0m0.356s 00:05:11.361 user 0m0.235s 00:05:11.361 sys 0m0.033s 00:05:11.361 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.361 14:10:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 14:10:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:11.361 14:10:30 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.361 14:10:30 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.361 14:10:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 ************************************ 00:05:11.361 START TEST rpc_plugins 00:05:11.361 ************************************ 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:11.361 14:10:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.361 14:10:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:11.361 14:10:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 14:10:30 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.361 14:10:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:11.361 { 00:05:11.361 "name": "Malloc1", 00:05:11.361 "aliases": [ 00:05:11.361 "a460df79-fa08-4181-b971-f696caa19ae0" 00:05:11.361 ], 00:05:11.361 "product_name": "Malloc disk", 00:05:11.361 "block_size": 4096, 00:05:11.361 "num_blocks": 256, 00:05:11.361 "uuid": "a460df79-fa08-4181-b971-f696caa19ae0", 00:05:11.361 "assigned_rate_limits": { 00:05:11.361 "rw_ios_per_sec": 0, 00:05:11.361 "rw_mbytes_per_sec": 0, 00:05:11.361 "r_mbytes_per_sec": 0, 00:05:11.361 "w_mbytes_per_sec": 0 00:05:11.361 }, 00:05:11.361 "claimed": false, 00:05:11.361 "zoned": false, 00:05:11.361 "supported_io_types": { 00:05:11.361 "read": true, 00:05:11.361 "write": true, 00:05:11.361 "unmap": true, 00:05:11.361 "flush": true, 00:05:11.361 "reset": true, 00:05:11.361 "nvme_admin": false, 00:05:11.361 "nvme_io": false, 00:05:11.361 "nvme_io_md": false, 00:05:11.361 "write_zeroes": true, 00:05:11.361 "zcopy": true, 00:05:11.361 "get_zone_info": false, 00:05:11.361 "zone_management": false, 00:05:11.361 "zone_append": false, 00:05:11.361 "compare": false, 00:05:11.361 "compare_and_write": false, 00:05:11.361 "abort": true, 00:05:11.361 "seek_hole": false, 00:05:11.361 "seek_data": false, 00:05:11.361 "copy": true, 00:05:11.361 "nvme_iov_md": false 00:05:11.361 }, 00:05:11.361 "memory_domains": [ 00:05:11.361 { 00:05:11.361 "dma_device_id": "system", 00:05:11.361 "dma_device_type": 1 00:05:11.361 }, 00:05:11.361 { 00:05:11.361 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:11.361 "dma_device_type": 2 00:05:11.361 } 00:05:11.361 ], 00:05:11.361 "driver_specific": {} 00:05:11.361 } 00:05:11.361 ]' 00:05:11.361 14:10:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:11.361 14:10:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:11.361 14:10:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.361 14:10:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.361 14:10:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:11.361 14:10:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:11.361 ************************************ 00:05:11.361 END TEST rpc_plugins 00:05:11.361 ************************************ 00:05:11.361 14:10:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:11.361 00:05:11.361 real 0m0.164s 00:05:11.361 user 0m0.112s 00:05:11.361 sys 0m0.015s 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.361 14:10:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:11.625 14:10:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:11.625 14:10:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.625 14:10:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.625 14:10:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.625 ************************************ 00:05:11.625 START TEST rpc_trace_cmd_test 00:05:11.625 ************************************ 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:11.625 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61979", 00:05:11.625 "tpoint_group_mask": "0x8", 00:05:11.625 "iscsi_conn": { 00:05:11.625 "mask": "0x2", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "scsi": { 00:05:11.625 "mask": "0x4", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "bdev": { 00:05:11.625 "mask": "0x8", 00:05:11.625 "tpoint_mask": "0xffffffffffffffff" 00:05:11.625 }, 00:05:11.625 "nvmf_rdma": { 00:05:11.625 "mask": "0x10", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "nvmf_tcp": { 00:05:11.625 "mask": "0x20", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "ftl": { 00:05:11.625 "mask": "0x40", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "blobfs": { 00:05:11.625 "mask": "0x80", 00:05:11.625 
"tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "dsa": { 00:05:11.625 "mask": "0x200", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "thread": { 00:05:11.625 "mask": "0x400", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "nvme_pcie": { 00:05:11.625 "mask": "0x800", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "iaa": { 00:05:11.625 "mask": "0x1000", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "nvme_tcp": { 00:05:11.625 "mask": "0x2000", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "bdev_nvme": { 00:05:11.625 "mask": "0x4000", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 }, 00:05:11.625 "sock": { 00:05:11.625 "mask": "0x8000", 00:05:11.625 "tpoint_mask": "0x0" 00:05:11.625 } 00:05:11.625 }' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:11.625 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:11.885 ************************************ 00:05:11.885 END TEST rpc_trace_cmd_test 00:05:11.885 ************************************ 00:05:11.885 14:10:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:11.885 00:05:11.885 real 0m0.273s 00:05:11.885 user 0m0.236s 00:05:11.885 sys 0m0.026s 00:05:11.885 14:10:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.885 14:10:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:11.885 14:10:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:11.885 14:10:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:11.885 14:10:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:11.885 14:10:31 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.885 14:10:31 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.885 14:10:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.885 ************************************ 00:05:11.885 START TEST rpc_daemon_integrity 00:05:11.885 ************************************ 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:11.885 14:10:31 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:11.885 { 00:05:11.885 "name": "Malloc2", 00:05:11.885 "aliases": [ 00:05:11.885 "70be0561-388e-456a-aca8-0a72340d3165" 00:05:11.885 ], 00:05:11.885 "product_name": "Malloc disk", 00:05:11.885 "block_size": 512, 00:05:11.885 "num_blocks": 16384, 00:05:11.885 "uuid": "70be0561-388e-456a-aca8-0a72340d3165", 00:05:11.885 "assigned_rate_limits": { 00:05:11.885 "rw_ios_per_sec": 0, 00:05:11.885 "rw_mbytes_per_sec": 0, 00:05:11.885 "r_mbytes_per_sec": 0, 00:05:11.885 "w_mbytes_per_sec": 0 00:05:11.885 }, 00:05:11.885 "claimed": false, 00:05:11.885 "zoned": false, 00:05:11.885 "supported_io_types": { 00:05:11.885 "read": true, 00:05:11.885 "write": true, 00:05:11.885 "unmap": true, 00:05:11.885 "flush": true, 00:05:11.885 "reset": true, 00:05:11.885 "nvme_admin": false, 00:05:11.885 "nvme_io": false, 00:05:11.885 "nvme_io_md": false, 00:05:11.885 "write_zeroes": true, 00:05:11.885 "zcopy": true, 00:05:11.885 "get_zone_info": false, 00:05:11.885 "zone_management": false, 00:05:11.885 "zone_append": false, 00:05:11.885 "compare": false, 00:05:11.885 "compare_and_write": false, 00:05:11.885 "abort": true, 00:05:11.885 "seek_hole": false, 00:05:11.885 "seek_data": false, 00:05:11.885 "copy": true, 00:05:11.885 "nvme_iov_md": false 00:05:11.885 }, 00:05:11.885 "memory_domains": [ 00:05:11.885 { 00:05:11.885 "dma_device_id": "system", 00:05:11.885 "dma_device_type": 1 00:05:11.885 }, 00:05:11.885 { 00:05:11.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:11.885 "dma_device_type": 2 00:05:11.885 } 00:05:11.885 ], 00:05:11.885 "driver_specific": {} 00:05:11.885 } 00:05:11.885 ]' 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:11.885 [2024-07-26 14:10:31.633537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:11.885 [2024-07-26 14:10:31.633606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:11.885 [2024-07-26 14:10:31.633635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:11.885 [2024-07-26 14:10:31.633649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:11.885 [2024-07-26 14:10:31.636119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:11.885 [2024-07-26 14:10:31.636158] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:11.885 Passthru0 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.885 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:12.145 { 00:05:12.145 "name": "Malloc2", 00:05:12.145 "aliases": [ 00:05:12.145 "70be0561-388e-456a-aca8-0a72340d3165" 00:05:12.145 ], 00:05:12.145 "product_name": "Malloc disk", 00:05:12.145 "block_size": 512, 00:05:12.145 "num_blocks": 16384, 00:05:12.145 "uuid": "70be0561-388e-456a-aca8-0a72340d3165", 00:05:12.145 "assigned_rate_limits": { 00:05:12.145 "rw_ios_per_sec": 0, 00:05:12.145 "rw_mbytes_per_sec": 0, 00:05:12.145 "r_mbytes_per_sec": 0, 00:05:12.145 "w_mbytes_per_sec": 0 00:05:12.145 }, 00:05:12.145 "claimed": true, 00:05:12.145 "claim_type": "exclusive_write", 00:05:12.145 "zoned": false, 00:05:12.145 "supported_io_types": { 00:05:12.145 "read": true, 00:05:12.145 "write": true, 00:05:12.145 "unmap": true, 00:05:12.145 "flush": true, 00:05:12.145 "reset": true, 00:05:12.145 "nvme_admin": false, 00:05:12.145 "nvme_io": false, 00:05:12.145 "nvme_io_md": false, 00:05:12.145 "write_zeroes": true, 00:05:12.145 "zcopy": true, 00:05:12.145 "get_zone_info": false, 00:05:12.145 "zone_management": false, 00:05:12.145 "zone_append": false, 00:05:12.145 "compare": false, 00:05:12.145 "compare_and_write": false, 00:05:12.145 "abort": true, 00:05:12.145 "seek_hole": false, 00:05:12.145 "seek_data": false, 00:05:12.145 "copy": true, 00:05:12.145 "nvme_iov_md": false 00:05:12.145 }, 00:05:12.145 "memory_domains": [ 00:05:12.145 { 00:05:12.145 "dma_device_id": "system", 00:05:12.145 "dma_device_type": 1 00:05:12.145 }, 00:05:12.145 { 00:05:12.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.145 "dma_device_type": 2 00:05:12.145 } 00:05:12.145 ], 00:05:12.145 "driver_specific": {} 00:05:12.145 }, 00:05:12.145 { 00:05:12.145 "name": "Passthru0", 00:05:12.145 "aliases": [ 00:05:12.145 "4f5380ca-1f4d-5f31-b4fd-b1c2c24f9401" 00:05:12.145 ], 00:05:12.145 "product_name": "passthru", 00:05:12.145 "block_size": 512, 00:05:12.145 "num_blocks": 16384, 00:05:12.145 "uuid": "4f5380ca-1f4d-5f31-b4fd-b1c2c24f9401", 00:05:12.145 "assigned_rate_limits": { 00:05:12.145 "rw_ios_per_sec": 0, 00:05:12.145 "rw_mbytes_per_sec": 0, 00:05:12.145 "r_mbytes_per_sec": 0, 00:05:12.145 "w_mbytes_per_sec": 0 00:05:12.145 }, 00:05:12.145 "claimed": false, 00:05:12.145 "zoned": false, 00:05:12.145 "supported_io_types": { 00:05:12.145 "read": true, 00:05:12.145 "write": true, 00:05:12.145 "unmap": true, 00:05:12.145 "flush": true, 00:05:12.145 "reset": true, 00:05:12.145 "nvme_admin": false, 00:05:12.145 "nvme_io": false, 00:05:12.145 "nvme_io_md": false, 00:05:12.145 "write_zeroes": true, 00:05:12.145 "zcopy": true, 00:05:12.145 "get_zone_info": false, 00:05:12.145 "zone_management": false, 00:05:12.145 "zone_append": false, 00:05:12.145 "compare": false, 00:05:12.145 "compare_and_write": false, 00:05:12.145 "abort": true, 00:05:12.145 "seek_hole": false, 00:05:12.145 "seek_data": false, 00:05:12.145 "copy": true, 00:05:12.145 "nvme_iov_md": false 00:05:12.145 }, 00:05:12.145 
"memory_domains": [ 00:05:12.145 { 00:05:12.145 "dma_device_id": "system", 00:05:12.145 "dma_device_type": 1 00:05:12.145 }, 00:05:12.145 { 00:05:12.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.145 "dma_device_type": 2 00:05:12.145 } 00:05:12.145 ], 00:05:12.145 "driver_specific": { 00:05:12.145 "passthru": { 00:05:12.145 "name": "Passthru0", 00:05:12.145 "base_bdev_name": "Malloc2" 00:05:12.145 } 00:05:12.145 } 00:05:12.145 } 00:05:12.145 ]' 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:12.145 ************************************ 00:05:12.145 END TEST rpc_daemon_integrity 00:05:12.145 ************************************ 00:05:12.145 14:10:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:12.146 00:05:12.146 real 0m0.346s 00:05:12.146 user 0m0.220s 00:05:12.146 sys 0m0.039s 00:05:12.146 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.146 14:10:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.146 14:10:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:12.146 14:10:31 rpc -- rpc/rpc.sh@84 -- # killprocess 61979 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@950 -- # '[' -z 61979 ']' 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@954 -- # kill -0 61979 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@955 -- # uname 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61979 00:05:12.146 killing process with pid 61979 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61979' 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@969 -- # kill 61979 00:05:12.146 14:10:31 rpc -- common/autotest_common.sh@974 -- # wait 61979 00:05:14.052 00:05:14.052 real 0m4.240s 00:05:14.052 user 0m5.071s 
00:05:14.052 sys 0m0.676s 00:05:14.052 14:10:33 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.052 14:10:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.052 ************************************ 00:05:14.052 END TEST rpc 00:05:14.052 ************************************ 00:05:14.052 14:10:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.052 14:10:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.052 14:10:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.052 14:10:33 -- common/autotest_common.sh@10 -- # set +x 00:05:14.052 ************************************ 00:05:14.052 START TEST skip_rpc 00:05:14.052 ************************************ 00:05:14.052 14:10:33 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:14.052 * Looking for test storage... 00:05:14.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.052 14:10:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.052 14:10:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.052 14:10:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.052 14:10:33 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.052 14:10:33 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.052 14:10:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.052 ************************************ 00:05:14.052 START TEST skip_rpc 00:05:14.052 ************************************ 00:05:14.052 14:10:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:14.053 14:10:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62189 00:05:14.053 14:10:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.053 14:10:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.053 14:10:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.312 [2024-07-26 14:10:33.887518] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:14.312 [2024-07-26 14:10:33.887688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62189 ] 00:05:14.312 [2024-07-26 14:10:34.058607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.572 [2024-07-26 14:10:34.210865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62189 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62189 ']' 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62189 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62189 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62189' 00:05:19.845 killing process with pid 62189 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62189 00:05:19.845 14:10:38 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62189 00:05:21.224 00:05:21.224 real 0m6.800s 00:05:21.224 user 0m6.369s 00:05:21.224 sys 0m0.330s 00:05:21.224 14:10:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.224 14:10:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.224 ************************************ 00:05:21.224 END TEST skip_rpc 00:05:21.224 
************************************ 00:05:21.224 14:10:40 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:21.224 14:10:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.224 14:10:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.224 14:10:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.224 ************************************ 00:05:21.224 START TEST skip_rpc_with_json 00:05:21.224 ************************************ 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62293 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62293 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62293 ']' 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:21.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:21.224 14:10:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.224 [2024-07-26 14:10:40.710153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:21.224 [2024-07-26 14:10:40.710332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62293 ] 00:05:21.224 [2024-07-26 14:10:40.866318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.483 [2024-07-26 14:10:41.022370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.052 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.052 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:22.052 14:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:22.052 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.053 [2024-07-26 14:10:41.646110] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:22.053 request: 00:05:22.053 { 00:05:22.053 "trtype": "tcp", 00:05:22.053 "method": "nvmf_get_transports", 00:05:22.053 "req_id": 1 00:05:22.053 } 00:05:22.053 Got JSON-RPC error response 00:05:22.053 response: 00:05:22.053 { 00:05:22.053 "code": -19, 00:05:22.053 "message": "No such device" 00:05:22.053 } 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.053 [2024-07-26 14:10:41.658222] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.053 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.312 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.312 14:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:22.312 { 00:05:22.312 "subsystems": [ 00:05:22.312 { 00:05:22.312 "subsystem": "keyring", 00:05:22.312 "config": [] 00:05:22.312 }, 00:05:22.312 { 00:05:22.312 "subsystem": "iobuf", 00:05:22.312 "config": [ 00:05:22.312 { 00:05:22.312 "method": "iobuf_set_options", 00:05:22.312 "params": { 00:05:22.312 "small_pool_count": 8192, 00:05:22.312 "large_pool_count": 1024, 00:05:22.312 "small_bufsize": 8192, 00:05:22.312 "large_bufsize": 135168 00:05:22.312 } 00:05:22.312 } 00:05:22.312 ] 00:05:22.312 }, 00:05:22.313 { 00:05:22.313 "subsystem": "sock", 00:05:22.313 "config": [ 00:05:22.313 { 00:05:22.313 "method": "sock_set_default_impl", 00:05:22.313 "params": { 00:05:22.313 "impl_name": "posix" 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "sock_impl_set_options", 00:05:22.313 "params": { 00:05:22.313 "impl_name": "ssl", 00:05:22.313 "recv_buf_size": 4096, 00:05:22.313 "send_buf_size": 4096, 
00:05:22.313 "enable_recv_pipe": true, 00:05:22.313 "enable_quickack": false, 00:05:22.313 "enable_placement_id": 0, 00:05:22.313 "enable_zerocopy_send_server": true, 00:05:22.313 "enable_zerocopy_send_client": false, 00:05:22.313 "zerocopy_threshold": 0, 00:05:22.313 "tls_version": 0, 00:05:22.313 "enable_ktls": false 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "sock_impl_set_options", 00:05:22.313 "params": { 00:05:22.313 "impl_name": "posix", 00:05:22.313 "recv_buf_size": 2097152, 00:05:22.313 "send_buf_size": 2097152, 00:05:22.313 "enable_recv_pipe": true, 00:05:22.313 "enable_quickack": false, 00:05:22.313 "enable_placement_id": 0, 00:05:22.313 "enable_zerocopy_send_server": true, 00:05:22.313 "enable_zerocopy_send_client": false, 00:05:22.313 "zerocopy_threshold": 0, 00:05:22.313 "tls_version": 0, 00:05:22.313 "enable_ktls": false 00:05:22.313 } 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "vmd", 00:05:22.313 "config": [] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "accel", 00:05:22.313 "config": [ 00:05:22.313 { 00:05:22.313 "method": "accel_set_options", 00:05:22.313 "params": { 00:05:22.313 "small_cache_size": 128, 00:05:22.313 "large_cache_size": 16, 00:05:22.313 "task_count": 2048, 00:05:22.313 "sequence_count": 2048, 00:05:22.313 "buf_count": 2048 00:05:22.313 } 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "bdev", 00:05:22.313 "config": [ 00:05:22.313 { 00:05:22.313 "method": "bdev_set_options", 00:05:22.313 "params": { 00:05:22.313 "bdev_io_pool_size": 65535, 00:05:22.313 "bdev_io_cache_size": 256, 00:05:22.313 "bdev_auto_examine": true, 00:05:22.313 "iobuf_small_cache_size": 128, 00:05:22.313 "iobuf_large_cache_size": 16 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "bdev_raid_set_options", 00:05:22.313 "params": { 00:05:22.313 "process_window_size_kb": 1024, 00:05:22.313 "process_max_bandwidth_mb_sec": 0 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "bdev_iscsi_set_options", 00:05:22.313 "params": { 00:05:22.313 "timeout_sec": 30 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "bdev_nvme_set_options", 00:05:22.313 "params": { 00:05:22.313 "action_on_timeout": "none", 00:05:22.313 "timeout_us": 0, 00:05:22.313 "timeout_admin_us": 0, 00:05:22.313 "keep_alive_timeout_ms": 10000, 00:05:22.313 "arbitration_burst": 0, 00:05:22.313 "low_priority_weight": 0, 00:05:22.313 "medium_priority_weight": 0, 00:05:22.313 "high_priority_weight": 0, 00:05:22.313 "nvme_adminq_poll_period_us": 10000, 00:05:22.313 "nvme_ioq_poll_period_us": 0, 00:05:22.313 "io_queue_requests": 0, 00:05:22.313 "delay_cmd_submit": true, 00:05:22.313 "transport_retry_count": 4, 00:05:22.313 "bdev_retry_count": 3, 00:05:22.313 "transport_ack_timeout": 0, 00:05:22.313 "ctrlr_loss_timeout_sec": 0, 00:05:22.313 "reconnect_delay_sec": 0, 00:05:22.313 "fast_io_fail_timeout_sec": 0, 00:05:22.313 "disable_auto_failback": false, 00:05:22.313 "generate_uuids": false, 00:05:22.313 "transport_tos": 0, 00:05:22.313 "nvme_error_stat": false, 00:05:22.313 "rdma_srq_size": 0, 00:05:22.313 "io_path_stat": false, 00:05:22.313 "allow_accel_sequence": false, 00:05:22.313 "rdma_max_cq_size": 0, 00:05:22.313 "rdma_cm_event_timeout_ms": 0, 00:05:22.313 "dhchap_digests": [ 00:05:22.313 "sha256", 00:05:22.313 "sha384", 00:05:22.313 "sha512" 00:05:22.313 ], 00:05:22.313 "dhchap_dhgroups": [ 00:05:22.313 "null", 00:05:22.313 "ffdhe2048", 00:05:22.313 
"ffdhe3072", 00:05:22.313 "ffdhe4096", 00:05:22.313 "ffdhe6144", 00:05:22.313 "ffdhe8192" 00:05:22.313 ] 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "bdev_nvme_set_hotplug", 00:05:22.313 "params": { 00:05:22.313 "period_us": 100000, 00:05:22.313 "enable": false 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "bdev_wait_for_examine" 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "scsi", 00:05:22.313 "config": null 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "scheduler", 00:05:22.313 "config": [ 00:05:22.313 { 00:05:22.313 "method": "framework_set_scheduler", 00:05:22.313 "params": { 00:05:22.313 "name": "static" 00:05:22.313 } 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "vhost_scsi", 00:05:22.313 "config": [] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "vhost_blk", 00:05:22.313 "config": [] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "ublk", 00:05:22.313 "config": [] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "nbd", 00:05:22.313 "config": [] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "nvmf", 00:05:22.313 "config": [ 00:05:22.313 { 00:05:22.313 "method": "nvmf_set_config", 00:05:22.313 "params": { 00:05:22.313 "discovery_filter": "match_any", 00:05:22.313 "admin_cmd_passthru": { 00:05:22.313 "identify_ctrlr": false 00:05:22.313 } 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "nvmf_set_max_subsystems", 00:05:22.313 "params": { 00:05:22.313 "max_subsystems": 1024 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "nvmf_set_crdt", 00:05:22.313 "params": { 00:05:22.313 "crdt1": 0, 00:05:22.313 "crdt2": 0, 00:05:22.313 "crdt3": 0 00:05:22.313 } 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "method": "nvmf_create_transport", 00:05:22.313 "params": { 00:05:22.313 "trtype": "TCP", 00:05:22.313 "max_queue_depth": 128, 00:05:22.313 "max_io_qpairs_per_ctrlr": 127, 00:05:22.313 "in_capsule_data_size": 4096, 00:05:22.313 "max_io_size": 131072, 00:05:22.313 "io_unit_size": 131072, 00:05:22.313 "max_aq_depth": 128, 00:05:22.313 "num_shared_buffers": 511, 00:05:22.313 "buf_cache_size": 4294967295, 00:05:22.313 "dif_insert_or_strip": false, 00:05:22.313 "zcopy": false, 00:05:22.313 "c2h_success": true, 00:05:22.313 "sock_priority": 0, 00:05:22.313 "abort_timeout_sec": 1, 00:05:22.313 "ack_timeout": 0, 00:05:22.313 "data_wr_pool_size": 0 00:05:22.313 } 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 }, 00:05:22.313 { 00:05:22.313 "subsystem": "iscsi", 00:05:22.313 "config": [ 00:05:22.313 { 00:05:22.313 "method": "iscsi_set_options", 00:05:22.313 "params": { 00:05:22.313 "node_base": "iqn.2016-06.io.spdk", 00:05:22.313 "max_sessions": 128, 00:05:22.313 "max_connections_per_session": 2, 00:05:22.313 "max_queue_depth": 64, 00:05:22.313 "default_time2wait": 2, 00:05:22.313 "default_time2retain": 20, 00:05:22.313 "first_burst_length": 8192, 00:05:22.313 "immediate_data": true, 00:05:22.313 "allow_duplicated_isid": false, 00:05:22.313 "error_recovery_level": 0, 00:05:22.313 "nop_timeout": 60, 00:05:22.313 "nop_in_interval": 30, 00:05:22.313 "disable_chap": false, 00:05:22.313 "require_chap": false, 00:05:22.313 "mutual_chap": false, 00:05:22.313 "chap_group": 0, 00:05:22.313 "max_large_datain_per_connection": 64, 00:05:22.313 "max_r2t_per_connection": 4, 00:05:22.313 "pdu_pool_size": 36864, 00:05:22.313 "immediate_data_pool_size": 16384, 00:05:22.313 "data_out_pool_size": 2048 
00:05:22.313 } 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 } 00:05:22.313 ] 00:05:22.313 } 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62293 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62293 ']' 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62293 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62293 00:05:22.313 killing process with pid 62293 00:05:22.313 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.314 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.314 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62293' 00:05:22.314 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62293 00:05:22.314 14:10:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62293 00:05:24.218 14:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62333 00:05:24.218 14:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:24.218 14:10:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62333 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62333 ']' 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62333 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62333 00:05:29.489 killing process with pid 62333 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62333' 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62333 00:05:29.489 14:10:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62333 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.865 00:05:30.865 real 0m9.730s 00:05:30.865 user 0m9.424s 00:05:30.865 sys 0m0.654s 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.865 ************************************ 00:05:30.865 
END TEST skip_rpc_with_json 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.865 ************************************ 00:05:30.865 14:10:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.865 14:10:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.865 14:10:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.865 14:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.865 ************************************ 00:05:30.865 START TEST skip_rpc_with_delay 00:05:30.865 ************************************ 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.865 [2024-07-26 14:10:50.517447] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
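The *ERROR* record above is the expected failure that skip_rpc_with_delay checks for: --wait-for-rpc only makes sense when an RPC server is started, so combining it with --no-rpc-server must abort. A minimal sketch of that assertion, using the same command line as in this log (the real test wraps the call in its NOT helper):

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'spdk_tgt unexpectedly started' >&2
      exit 1
  fi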
00:05:30.865 [2024-07-26 14:10:50.517615] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.865 00:05:30.865 real 0m0.183s 00:05:30.865 user 0m0.109s 00:05:30.865 sys 0m0.073s 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.865 14:10:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.865 ************************************ 00:05:30.865 END TEST skip_rpc_with_delay 00:05:30.865 ************************************ 00:05:30.865 14:10:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:31.124 14:10:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:31.124 14:10:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:31.124 14:10:50 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.124 14:10:50 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.124 14:10:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.124 ************************************ 00:05:31.124 START TEST exit_on_failed_rpc_init 00:05:31.124 ************************************ 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62461 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62461 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62461 ']' 00:05:31.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.124 14:10:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.124 [2024-07-26 14:10:50.763620] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:31.124 [2024-07-26 14:10:50.764041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62461 ] 00:05:31.387 [2024-07-26 14:10:50.935863] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.387 [2024-07-26 14:10:51.085916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.966 14:10:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:32.224 [2024-07-26 14:10:51.811394] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:32.224 [2024-07-26 14:10:51.811571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62479 ] 00:05:32.224 [2024-07-26 14:10:51.979303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.483 [2024-07-26 14:10:52.177697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.483 [2024-07-26 14:10:52.177814] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
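The *ERROR* record above is the point of exit_on_failed_rpc_init: the first spdk_tgt (pid 62461) already owns /var/tmp/spdk.sock, so the second target started on the same default RPC socket fails to initialize its RPC service and shuts down, as the records that follow show. A simplified sketch of that conflict (the real test uses its waitforlisten helper before launching the second instance; the sleep below is only a stand-in for that step):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  $spdk_tgt -m 0x1 &                 # first instance, listens on /var/tmp/spdk.sock
  first=$!
  sleep 1                            # stand-in for the test's waitforlisten on $first

  if $spdk_tgt -m 0x2; then          # fails: RPC Unix domain socket path already in use
      echo 'second spdk_tgt unexpectedly started' >&2
      kill "$first"
      exit 1
  fi
  kill "$first"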
00:05:32.483 [2024-07-26 14:10:52.177845] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.483 [2024-07-26 14:10:52.177867] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62461 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62461 ']' 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62461 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62461 00:05:33.051 killing process with pid 62461 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62461' 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62461 00:05:33.051 14:10:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62461 00:05:34.960 ************************************ 00:05:34.960 END TEST exit_on_failed_rpc_init 00:05:34.960 00:05:34.960 real 0m3.660s 00:05:34.960 user 0m4.361s 00:05:34.960 sys 0m0.478s 00:05:34.960 14:10:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.960 14:10:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.960 ************************************ 00:05:34.960 14:10:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.960 00:05:34.960 real 0m20.665s 00:05:34.960 user 0m20.361s 00:05:34.960 sys 0m1.708s 00:05:34.960 ************************************ 00:05:34.960 END TEST skip_rpc 00:05:34.960 ************************************ 00:05:34.960 14:10:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.960 14:10:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.960 14:10:54 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.960 14:10:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.960 14:10:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.960 14:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.960 
************************************ 00:05:34.960 START TEST rpc_client 00:05:34.960 ************************************ 00:05:34.960 14:10:54 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.960 * Looking for test storage... 00:05:34.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.960 14:10:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.960 OK 00:05:34.960 14:10:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.960 ************************************ 00:05:34.960 END TEST rpc_client 00:05:34.960 ************************************ 00:05:34.960 00:05:34.960 real 0m0.140s 00:05:34.960 user 0m0.059s 00:05:34.960 sys 0m0.086s 00:05:34.960 14:10:54 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.960 14:10:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:34.960 14:10:54 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.960 14:10:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.960 14:10:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.960 14:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.960 ************************************ 00:05:34.960 START TEST json_config 00:05:34.960 ************************************ 00:05:34.960 14:10:54 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:98a6780f-7c43-4db9-8e1a-dfa2b32a045c 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=98a6780f-7c43-4db9-8e1a-dfa2b32a045c 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.960 14:10:54 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.960 14:10:54 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.960 14:10:54 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.960 14:10:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.960 14:10:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.960 14:10:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.960 14:10:54 json_config -- paths/export.sh@5 -- # export PATH 00:05:34.960 14:10:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@47 -- # : 0 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:34.960 14:10:54 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.960 14:10:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:34.961 
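The xtrace records above show why json_config exits almost immediately in this run: json_config.sh guards its body behind the per-feature test flags, and none of them are enabled here, so it prints the warning that follows and exits 0. The guard, as reconstructed from those trace lines:

  if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + \
        SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
      echo 'WARNING: No tests are enabled so not running JSON configuration tests'
      exit 0
  fi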
WARNING: No tests are enabled so not running JSON configuration tests 00:05:34.961 14:10:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:34.961 00:05:34.961 real 0m0.076s 00:05:34.961 user 0m0.032s 00:05:34.961 sys 0m0.043s 00:05:34.961 14:10:54 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.961 14:10:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.961 ************************************ 00:05:34.961 END TEST json_config 00:05:34.961 ************************************ 00:05:34.961 14:10:54 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.961 14:10:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.961 14:10:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.961 14:10:54 -- common/autotest_common.sh@10 -- # set +x 00:05:34.961 ************************************ 00:05:34.961 START TEST json_config_extra_key 00:05:34.961 ************************************ 00:05:34.961 14:10:54 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.220 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:98a6780f-7c43-4db9-8e1a-dfa2b32a045c 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=98a6780f-7c43-4db9-8e1a-dfa2b32a045c 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.220 14:10:54 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.220 14:10:54 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.220 14:10:54 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.220 14:10:54 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.220 
14:10:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.220 14:10:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.220 14:10:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.220 14:10:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.221 14:10:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:35.221 14:10:54 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.221 14:10:54 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.221 INFO: launching applications... 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:35.221 14:10:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=62654 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.221 Waiting for target to run... 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 62654 /var/tmp/spdk_tgt.sock 00:05:35.221 14:10:54 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 62654 ']' 00:05:35.221 14:10:54 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.221 14:10:54 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:35.221 14:10:54 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.221 14:10:54 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:35.221 14:10:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.221 14:10:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.221 [2024-07-26 14:10:54.909854] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
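(Annotation for readers following the trace: json_config_extra_key starts spdk_tgt with the extra_key.json config and then blocks in waitforlisten until the RPC socket at /var/tmp/spdk_tgt.sock answers. A minimal standalone sketch of that start-and-wait step, using only binaries and flags visible above; the polling loop and its roughly 10 s budget are illustrative assumptions, not the real waitforlisten implementation:)

  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk_tgt.sock

  # Launch the target in the background with the same flags as the test above.
  "$SPDK_TGT" -m 0x1 -s 1024 -r "$SOCK" \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  tgt_pid=$!

  # Poll the RPC socket until the target responds (illustrative timeout).
  for _ in $(seq 1 100); do
      "$RPC_PY" -s "$SOCK" rpc_get_methods &>/dev/null && break
      sleep 0.1
  done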
00:05:35.221 [2024-07-26 14:10:54.910056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62654 ] 00:05:35.789 [2024-07-26 14:10:55.262864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.789 [2024-07-26 14:10:55.409004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.357 00:05:36.357 INFO: shutting down applications... 00:05:36.357 14:10:55 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:36.357 14:10:55 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.357 14:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:36.357 14:10:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 62654 ]] 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 62654 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62654 00:05:36.357 14:10:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.926 14:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.926 14:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.926 14:10:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62654 00:05:36.926 14:10:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.185 14:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.185 14:10:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.185 14:10:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62654 00:05:37.185 14:10:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.754 14:10:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.754 14:10:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.754 14:10:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62654 00:05:37.754 14:10:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.322 14:10:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.322 14:10:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.322 14:10:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 62654 00:05:38.322 14:10:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:38.322 14:10:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:38.322 14:10:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:38.322 SPDK target shutdown done 00:05:38.322 14:10:57 json_config_extra_key -- 
json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:38.322 Success 00:05:38.322 14:10:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:38.322 00:05:38.322 real 0m3.223s 00:05:38.322 user 0m3.121s 00:05:38.322 sys 0m0.483s 00:05:38.322 ************************************ 00:05:38.322 END TEST json_config_extra_key 00:05:38.322 ************************************ 00:05:38.322 14:10:57 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.322 14:10:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:38.322 14:10:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.322 14:10:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.322 14:10:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.322 14:10:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.322 ************************************ 00:05:38.322 START TEST alias_rpc 00:05:38.322 ************************************ 00:05:38.322 14:10:57 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:38.322 * Looking for test storage... 00:05:38.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:38.322 14:10:58 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:38.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.322 14:10:58 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62739 00:05:38.322 14:10:58 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:38.322 14:10:58 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62739 00:05:38.322 14:10:58 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 62739 ']' 00:05:38.322 14:10:58 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.322 14:10:58 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.322 14:10:58 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.322 14:10:58 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.322 14:10:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.582 [2024-07-26 14:10:58.186623] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
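(Annotation: the json_config_extra_key shutdown traced above boils down to a SIGINT followed by a bounded poll on the target pid; a hedged sketch of that loop, with the 30 x 0.5 s retry budget taken from the trace rather than the exact common.sh helper:)

  kill -SIGINT "$tgt_pid"
  for (( i = 0; i < 30; i++ )); do
      # kill -0 only tests whether the process still exists.
      if ! kill -0 "$tgt_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done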
00:05:38.582 [2024-07-26 14:10:58.186794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62739 ] 00:05:38.841 [2024-07-26 14:10:58.354368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.841 [2024-07-26 14:10:58.503744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.409 14:10:59 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.409 14:10:59 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.409 14:10:59 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:39.668 14:10:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62739 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 62739 ']' 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 62739 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62739 00:05:39.668 killing process with pid 62739 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62739' 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@969 -- # kill 62739 00:05:39.668 14:10:59 alias_rpc -- common/autotest_common.sh@974 -- # wait 62739 00:05:41.572 ************************************ 00:05:41.572 END TEST alias_rpc 00:05:41.572 ************************************ 00:05:41.572 00:05:41.572 real 0m3.174s 00:05:41.572 user 0m3.433s 00:05:41.572 sys 0m0.425s 00:05:41.572 14:11:01 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.572 14:11:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.572 14:11:01 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:41.572 14:11:01 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.572 14:11:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.572 14:11:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.572 14:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:41.572 ************************************ 00:05:41.572 START TEST spdkcli_tcp 00:05:41.572 ************************************ 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:41.572 * Looking for test storage... 
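(Annotation: the alias_rpc run above drives the freshly started target entirely through scripts/rpc.py, replaying a JSON configuration with load_config -i before killing the target. A hedged reproduction; the config file name is a placeholder, and feeding it via stdin is an assumption about how the test supplies it:)

  RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Replay an SPDK JSON config against the running target on the default
  # socket; -i is the flag used by the alias_rpc test above.
  "$RPC_PY" -s /var/tmp/spdk.sock load_config -i < config.json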
00:05:41.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62827 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:41.572 14:11:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 62827 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 62827 ']' 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.572 14:11:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.831 [2024-07-26 14:11:01.419742] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:41.831 [2024-07-26 14:11:01.419947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62827 ] 00:05:41.831 [2024-07-26 14:11:01.590649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.090 [2024-07-26 14:11:01.743628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.090 [2024-07-26 14:11:01.743646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.658 14:11:02 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.658 14:11:02 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:42.658 14:11:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=62844 00:05:42.658 14:11:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:42.658 14:11:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:42.917 [ 00:05:42.917 "bdev_malloc_delete", 00:05:42.917 "bdev_malloc_create", 00:05:42.917 "bdev_null_resize", 00:05:42.917 "bdev_null_delete", 00:05:42.917 "bdev_null_create", 00:05:42.917 "bdev_nvme_cuse_unregister", 00:05:42.917 "bdev_nvme_cuse_register", 00:05:42.917 "bdev_opal_new_user", 00:05:42.917 "bdev_opal_set_lock_state", 00:05:42.917 "bdev_opal_delete", 00:05:42.917 "bdev_opal_get_info", 00:05:42.917 "bdev_opal_create", 00:05:42.917 "bdev_nvme_opal_revert", 00:05:42.917 "bdev_nvme_opal_init", 00:05:42.917 "bdev_nvme_send_cmd", 00:05:42.917 "bdev_nvme_get_path_iostat", 00:05:42.917 "bdev_nvme_get_mdns_discovery_info", 00:05:42.917 "bdev_nvme_stop_mdns_discovery", 00:05:42.917 "bdev_nvme_start_mdns_discovery", 00:05:42.917 "bdev_nvme_set_multipath_policy", 00:05:42.917 "bdev_nvme_set_preferred_path", 00:05:42.917 "bdev_nvme_get_io_paths", 00:05:42.917 "bdev_nvme_remove_error_injection", 00:05:42.917 "bdev_nvme_add_error_injection", 00:05:42.917 "bdev_nvme_get_discovery_info", 00:05:42.917 "bdev_nvme_stop_discovery", 00:05:42.917 "bdev_nvme_start_discovery", 00:05:42.917 "bdev_nvme_get_controller_health_info", 00:05:42.917 "bdev_nvme_disable_controller", 00:05:42.917 "bdev_nvme_enable_controller", 00:05:42.917 "bdev_nvme_reset_controller", 00:05:42.917 "bdev_nvme_get_transport_statistics", 00:05:42.917 "bdev_nvme_apply_firmware", 00:05:42.917 "bdev_nvme_detach_controller", 00:05:42.917 "bdev_nvme_get_controllers", 00:05:42.917 "bdev_nvme_attach_controller", 00:05:42.917 "bdev_nvme_set_hotplug", 00:05:42.917 "bdev_nvme_set_options", 00:05:42.917 "bdev_passthru_delete", 00:05:42.917 "bdev_passthru_create", 00:05:42.917 "bdev_lvol_set_parent_bdev", 00:05:42.917 "bdev_lvol_set_parent", 00:05:42.917 "bdev_lvol_check_shallow_copy", 00:05:42.917 "bdev_lvol_start_shallow_copy", 00:05:42.917 "bdev_lvol_grow_lvstore", 00:05:42.917 "bdev_lvol_get_lvols", 00:05:42.917 "bdev_lvol_get_lvstores", 00:05:42.917 "bdev_lvol_delete", 00:05:42.917 "bdev_lvol_set_read_only", 00:05:42.917 "bdev_lvol_resize", 00:05:42.917 "bdev_lvol_decouple_parent", 00:05:42.917 "bdev_lvol_inflate", 00:05:42.917 "bdev_lvol_rename", 00:05:42.917 "bdev_lvol_clone_bdev", 00:05:42.917 "bdev_lvol_clone", 00:05:42.917 "bdev_lvol_snapshot", 00:05:42.917 "bdev_lvol_create", 00:05:42.917 "bdev_lvol_delete_lvstore", 00:05:42.917 "bdev_lvol_rename_lvstore", 00:05:42.917 "bdev_lvol_create_lvstore", 
00:05:42.917 "bdev_raid_set_options", 00:05:42.917 "bdev_raid_remove_base_bdev", 00:05:42.917 "bdev_raid_add_base_bdev", 00:05:42.917 "bdev_raid_delete", 00:05:42.917 "bdev_raid_create", 00:05:42.917 "bdev_raid_get_bdevs", 00:05:42.917 "bdev_error_inject_error", 00:05:42.917 "bdev_error_delete", 00:05:42.917 "bdev_error_create", 00:05:42.917 "bdev_split_delete", 00:05:42.917 "bdev_split_create", 00:05:42.917 "bdev_delay_delete", 00:05:42.917 "bdev_delay_create", 00:05:42.917 "bdev_delay_update_latency", 00:05:42.917 "bdev_zone_block_delete", 00:05:42.917 "bdev_zone_block_create", 00:05:42.917 "blobfs_create", 00:05:42.917 "blobfs_detect", 00:05:42.917 "blobfs_set_cache_size", 00:05:42.917 "bdev_xnvme_delete", 00:05:42.917 "bdev_xnvme_create", 00:05:42.917 "bdev_aio_delete", 00:05:42.917 "bdev_aio_rescan", 00:05:42.917 "bdev_aio_create", 00:05:42.917 "bdev_ftl_set_property", 00:05:42.917 "bdev_ftl_get_properties", 00:05:42.917 "bdev_ftl_get_stats", 00:05:42.917 "bdev_ftl_unmap", 00:05:42.917 "bdev_ftl_unload", 00:05:42.917 "bdev_ftl_delete", 00:05:42.917 "bdev_ftl_load", 00:05:42.917 "bdev_ftl_create", 00:05:42.917 "bdev_virtio_attach_controller", 00:05:42.917 "bdev_virtio_scsi_get_devices", 00:05:42.917 "bdev_virtio_detach_controller", 00:05:42.917 "bdev_virtio_blk_set_hotplug", 00:05:42.917 "bdev_iscsi_delete", 00:05:42.917 "bdev_iscsi_create", 00:05:42.917 "bdev_iscsi_set_options", 00:05:42.917 "accel_error_inject_error", 00:05:42.917 "ioat_scan_accel_module", 00:05:42.917 "dsa_scan_accel_module", 00:05:42.917 "iaa_scan_accel_module", 00:05:42.917 "keyring_file_remove_key", 00:05:42.917 "keyring_file_add_key", 00:05:42.917 "keyring_linux_set_options", 00:05:42.917 "iscsi_get_histogram", 00:05:42.917 "iscsi_enable_histogram", 00:05:42.917 "iscsi_set_options", 00:05:42.917 "iscsi_get_auth_groups", 00:05:42.917 "iscsi_auth_group_remove_secret", 00:05:42.917 "iscsi_auth_group_add_secret", 00:05:42.917 "iscsi_delete_auth_group", 00:05:42.917 "iscsi_create_auth_group", 00:05:42.917 "iscsi_set_discovery_auth", 00:05:42.917 "iscsi_get_options", 00:05:42.917 "iscsi_target_node_request_logout", 00:05:42.917 "iscsi_target_node_set_redirect", 00:05:42.917 "iscsi_target_node_set_auth", 00:05:42.917 "iscsi_target_node_add_lun", 00:05:42.917 "iscsi_get_stats", 00:05:42.917 "iscsi_get_connections", 00:05:42.917 "iscsi_portal_group_set_auth", 00:05:42.917 "iscsi_start_portal_group", 00:05:42.917 "iscsi_delete_portal_group", 00:05:42.917 "iscsi_create_portal_group", 00:05:42.917 "iscsi_get_portal_groups", 00:05:42.917 "iscsi_delete_target_node", 00:05:42.917 "iscsi_target_node_remove_pg_ig_maps", 00:05:42.917 "iscsi_target_node_add_pg_ig_maps", 00:05:42.917 "iscsi_create_target_node", 00:05:42.917 "iscsi_get_target_nodes", 00:05:42.918 "iscsi_delete_initiator_group", 00:05:42.918 "iscsi_initiator_group_remove_initiators", 00:05:42.918 "iscsi_initiator_group_add_initiators", 00:05:42.918 "iscsi_create_initiator_group", 00:05:42.918 "iscsi_get_initiator_groups", 00:05:42.918 "nvmf_set_crdt", 00:05:42.918 "nvmf_set_config", 00:05:42.918 "nvmf_set_max_subsystems", 00:05:42.918 "nvmf_stop_mdns_prr", 00:05:42.918 "nvmf_publish_mdns_prr", 00:05:42.918 "nvmf_subsystem_get_listeners", 00:05:42.918 "nvmf_subsystem_get_qpairs", 00:05:42.918 "nvmf_subsystem_get_controllers", 00:05:42.918 "nvmf_get_stats", 00:05:42.918 "nvmf_get_transports", 00:05:42.918 "nvmf_create_transport", 00:05:42.918 "nvmf_get_targets", 00:05:42.918 "nvmf_delete_target", 00:05:42.918 "nvmf_create_target", 00:05:42.918 
"nvmf_subsystem_allow_any_host", 00:05:42.918 "nvmf_subsystem_remove_host", 00:05:42.918 "nvmf_subsystem_add_host", 00:05:42.918 "nvmf_ns_remove_host", 00:05:42.918 "nvmf_ns_add_host", 00:05:42.918 "nvmf_subsystem_remove_ns", 00:05:42.918 "nvmf_subsystem_add_ns", 00:05:42.918 "nvmf_subsystem_listener_set_ana_state", 00:05:42.918 "nvmf_discovery_get_referrals", 00:05:42.918 "nvmf_discovery_remove_referral", 00:05:42.918 "nvmf_discovery_add_referral", 00:05:42.918 "nvmf_subsystem_remove_listener", 00:05:42.918 "nvmf_subsystem_add_listener", 00:05:42.918 "nvmf_delete_subsystem", 00:05:42.918 "nvmf_create_subsystem", 00:05:42.918 "nvmf_get_subsystems", 00:05:42.918 "env_dpdk_get_mem_stats", 00:05:42.918 "nbd_get_disks", 00:05:42.918 "nbd_stop_disk", 00:05:42.918 "nbd_start_disk", 00:05:42.918 "ublk_recover_disk", 00:05:42.918 "ublk_get_disks", 00:05:42.918 "ublk_stop_disk", 00:05:42.918 "ublk_start_disk", 00:05:42.918 "ublk_destroy_target", 00:05:42.918 "ublk_create_target", 00:05:42.918 "virtio_blk_create_transport", 00:05:42.918 "virtio_blk_get_transports", 00:05:42.918 "vhost_controller_set_coalescing", 00:05:42.918 "vhost_get_controllers", 00:05:42.918 "vhost_delete_controller", 00:05:42.918 "vhost_create_blk_controller", 00:05:42.918 "vhost_scsi_controller_remove_target", 00:05:42.918 "vhost_scsi_controller_add_target", 00:05:42.918 "vhost_start_scsi_controller", 00:05:42.918 "vhost_create_scsi_controller", 00:05:42.918 "thread_set_cpumask", 00:05:42.918 "framework_get_governor", 00:05:42.918 "framework_get_scheduler", 00:05:42.918 "framework_set_scheduler", 00:05:42.918 "framework_get_reactors", 00:05:42.918 "thread_get_io_channels", 00:05:42.918 "thread_get_pollers", 00:05:42.918 "thread_get_stats", 00:05:42.918 "framework_monitor_context_switch", 00:05:42.918 "spdk_kill_instance", 00:05:42.918 "log_enable_timestamps", 00:05:42.918 "log_get_flags", 00:05:42.918 "log_clear_flag", 00:05:42.918 "log_set_flag", 00:05:42.918 "log_get_level", 00:05:42.918 "log_set_level", 00:05:42.918 "log_get_print_level", 00:05:42.918 "log_set_print_level", 00:05:42.918 "framework_enable_cpumask_locks", 00:05:42.918 "framework_disable_cpumask_locks", 00:05:42.918 "framework_wait_init", 00:05:42.918 "framework_start_init", 00:05:42.918 "scsi_get_devices", 00:05:42.918 "bdev_get_histogram", 00:05:42.918 "bdev_enable_histogram", 00:05:42.918 "bdev_set_qos_limit", 00:05:42.918 "bdev_set_qd_sampling_period", 00:05:42.918 "bdev_get_bdevs", 00:05:42.918 "bdev_reset_iostat", 00:05:42.918 "bdev_get_iostat", 00:05:42.918 "bdev_examine", 00:05:42.918 "bdev_wait_for_examine", 00:05:42.918 "bdev_set_options", 00:05:42.918 "notify_get_notifications", 00:05:42.918 "notify_get_types", 00:05:42.918 "accel_get_stats", 00:05:42.918 "accel_set_options", 00:05:42.918 "accel_set_driver", 00:05:42.918 "accel_crypto_key_destroy", 00:05:42.918 "accel_crypto_keys_get", 00:05:42.918 "accel_crypto_key_create", 00:05:42.918 "accel_assign_opc", 00:05:42.918 "accel_get_module_info", 00:05:42.918 "accel_get_opc_assignments", 00:05:42.918 "vmd_rescan", 00:05:42.918 "vmd_remove_device", 00:05:42.918 "vmd_enable", 00:05:42.918 "sock_get_default_impl", 00:05:42.918 "sock_set_default_impl", 00:05:42.918 "sock_impl_set_options", 00:05:42.918 "sock_impl_get_options", 00:05:42.918 "iobuf_get_stats", 00:05:42.918 "iobuf_set_options", 00:05:42.918 "framework_get_pci_devices", 00:05:42.918 "framework_get_config", 00:05:42.918 "framework_get_subsystems", 00:05:42.918 "trace_get_info", 00:05:42.918 "trace_get_tpoint_group_mask", 00:05:42.918 
"trace_disable_tpoint_group", 00:05:42.918 "trace_enable_tpoint_group", 00:05:42.918 "trace_clear_tpoint_mask", 00:05:42.918 "trace_set_tpoint_mask", 00:05:42.918 "keyring_get_keys", 00:05:42.918 "spdk_get_version", 00:05:42.918 "rpc_get_methods" 00:05:42.918 ] 00:05:42.918 14:11:02 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:42.918 14:11:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:42.918 14:11:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 62827 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 62827 ']' 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 62827 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.918 14:11:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62827 00:05:43.177 killing process with pid 62827 00:05:43.177 14:11:02 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.177 14:11:02 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.177 14:11:02 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62827' 00:05:43.177 14:11:02 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 62827 00:05:43.177 14:11:02 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 62827 00:05:45.081 00:05:45.081 real 0m3.279s 00:05:45.081 user 0m5.778s 00:05:45.081 sys 0m0.501s 00:05:45.081 14:11:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.081 14:11:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:45.081 ************************************ 00:05:45.081 END TEST spdkcli_tcp 00:05:45.081 ************************************ 00:05:45.081 14:11:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.081 14:11:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.081 14:11:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.081 14:11:04 -- common/autotest_common.sh@10 -- # set +x 00:05:45.081 ************************************ 00:05:45.081 START TEST dpdk_mem_utility 00:05:45.081 ************************************ 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:45.081 * Looking for test storage... 00:05:45.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:45.081 14:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:45.081 14:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62941 00:05:45.081 14:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62941 00:05:45.081 14:11:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 62941 ']' 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.081 14:11:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.081 [2024-07-26 14:11:04.746711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:45.081 [2024-07-26 14:11:04.746917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62941 ] 00:05:45.340 [2024-07-26 14:11:04.918687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.340 [2024-07-26 14:11:05.072773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.280 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.280 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:46.280 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:46.280 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:46.280 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.280 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:46.280 { 00:05:46.280 "filename": "/tmp/spdk_mem_dump.txt" 00:05:46.280 } 00:05:46.280 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:46.280 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:46.280 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:46.280 1 heaps totaling size 820.000000 MiB 00:05:46.280 size: 820.000000 MiB heap id: 0 00:05:46.280 end heaps---------- 00:05:46.280 8 mempools totaling size 598.116089 MiB 00:05:46.280 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:46.280 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:46.280 size: 84.521057 MiB name: bdev_io_62941 00:05:46.280 size: 51.011292 MiB name: evtpool_62941 00:05:46.280 size: 50.003479 MiB name: msgpool_62941 00:05:46.280 size: 21.763794 MiB name: PDU_Pool 00:05:46.280 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:46.280 size: 0.026123 MiB name: Session_Pool 00:05:46.280 end mempools------- 00:05:46.280 6 
memzones totaling size 4.142822 MiB 00:05:46.280 size: 1.000366 MiB name: RG_ring_0_62941 00:05:46.280 size: 1.000366 MiB name: RG_ring_1_62941 00:05:46.280 size: 1.000366 MiB name: RG_ring_4_62941 00:05:46.280 size: 1.000366 MiB name: RG_ring_5_62941 00:05:46.280 size: 0.125366 MiB name: RG_ring_2_62941 00:05:46.280 size: 0.015991 MiB name: RG_ring_3_62941 00:05:46.280 end memzones------- 00:05:46.280 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:46.280 heap id: 0 total size: 820.000000 MiB number of busy elements: 300 number of free elements: 18 00:05:46.280 list of free elements. size: 18.451538 MiB 00:05:46.280 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:46.280 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:46.280 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:46.280 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:46.280 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:46.280 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:46.280 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:46.280 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:46.280 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:46.280 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:46.280 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:46.280 element at address: 0x200000200000 with size: 0.829956 MiB 00:05:46.280 element at address: 0x20001b000000 with size: 0.564392 MiB 00:05:46.280 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:46.280 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:46.280 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:46.280 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:46.280 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:46.280 list of standard malloc elements. 
size: 199.284058 MiB 00:05:46.280 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:46.280 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:46.280 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:46.280 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:46.280 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:46.280 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:46.280 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:46.280 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:46.280 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:46.280 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:46.280 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:46.281 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d6e00 with size: 0.000244 MiB 
00:05:46.281 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:46.281 element at 
address: 0x2000137ff280 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:46.281 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:46.281 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:46.282 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:46.282 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b090ec0 
with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b093fc0 with size: 0.000244 MiB 
00:05:46.282 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:46.282 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:46.282 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:46.282 element at 
address: 0x20002846c980 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:46.282 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846fa80 
with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:46.283 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:46.283 list of memzone associated elements. size: 602.264404 MiB 00:05:46.283 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:46.283 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:46.283 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:46.283 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:46.283 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:46.283 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62941_0 00:05:46.283 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:46.283 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62941_0 00:05:46.283 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:46.283 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62941_0 00:05:46.283 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:46.283 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:46.283 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:46.283 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:46.283 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:46.283 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62941 00:05:46.283 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:46.283 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62941 00:05:46.283 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:46.283 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62941 00:05:46.283 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:46.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:46.283 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:46.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:46.283 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:46.283 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:46.283 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:46.283 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:46.283 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:46.283 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62941 00:05:46.283 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:46.283 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62941 00:05:46.283 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:46.283 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62941 00:05:46.283 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:46.283 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62941 00:05:46.283 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:46.283 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62941 00:05:46.283 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:46.283 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:46.283 element at address: 0x200013878680 with size: 0.500549 MiB 
00:05:46.283 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:46.283 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:46.283 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:46.283 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:46.283 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62941 00:05:46.283 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:46.283 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:46.283 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:46.283 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:46.283 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:46.283 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62941 00:05:46.283 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:46.283 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:46.283 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:46.283 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62941 00:05:46.283 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:46.283 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62941 00:05:46.283 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:46.283 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:46.283 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:46.283 14:11:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62941 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 62941 ']' 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 62941 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62941 00:05:46.283 killing process with pid 62941 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62941' 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 62941 00:05:46.283 14:11:05 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 62941 00:05:48.218 00:05:48.218 real 0m3.128s 00:05:48.218 user 0m3.226s 00:05:48.218 sys 0m0.440s 00:05:48.218 ************************************ 00:05:48.218 END TEST dpdk_mem_utility 00:05:48.218 ************************************ 00:05:48.218 14:11:07 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.218 14:11:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.218 14:11:07 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.218 14:11:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.218 14:11:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.218 14:11:07 -- common/autotest_common.sh@10 -- # set +x 00:05:48.218 ************************************ 00:05:48.218 START TEST event 00:05:48.218 
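The teardown just above follows the killprocess helper pattern from common/autotest_common.sh: confirm the pid is still alive, check what it is (an SPDK app shows up as reactor_0), announce the kill, then kill and reap it. A minimal stand-alone sketch of that pattern (the helper's exact structure is an assumption; the individual commands are the ones visible in the trace):

    pid=62941
    if kill -0 "$pid" 2>/dev/null; then                # process still alive?
        ps --no-headers -o comm= "$pid"                # SPDK reactors report as reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                # reap it only if it is a child of this shell
    fi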
************************************ 00:05:48.218 14:11:07 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:48.218 * Looking for test storage... 00:05:48.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:48.218 14:11:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.218 14:11:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.218 14:11:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.218 14:11:07 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:48.218 14:11:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.218 14:11:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.218 ************************************ 00:05:48.218 START TEST event_perf 00:05:48.218 ************************************ 00:05:48.218 14:11:07 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:48.218 Running I/O for 1 seconds...[2024-07-26 14:11:07.849997] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:48.218 [2024-07-26 14:11:07.850264] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63030 ] 00:05:48.477 [2024-07-26 14:11:08.007355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.477 [2024-07-26 14:11:08.159187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.477 [2024-07-26 14:11:08.159283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.477 Running I/O for 1 seconds...[2024-07-26 14:11:08.159400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.477 [2024-07-26 14:11:08.159415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.853 00:05:49.853 lcore 0: 198626 00:05:49.853 lcore 1: 198625 00:05:49.853 lcore 2: 198626 00:05:49.853 lcore 3: 198626 00:05:49.853 done. 00:05:49.853 00:05:49.853 real 0m1.708s 00:05:49.853 ************************************ 00:05:49.853 END TEST event_perf 00:05:49.853 ************************************ 00:05:49.853 user 0m4.481s 00:05:49.853 sys 0m0.103s 00:05:49.853 14:11:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.853 14:11:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.853 14:11:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:49.853 14:11:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:49.853 14:11:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.853 14:11:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.853 ************************************ 00:05:49.853 START TEST event_reactor 00:05:49.853 ************************************ 00:05:49.853 14:11:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:50.112 [2024-07-26 14:11:09.616438] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
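The event_perf run above pins the app to four cores (-m 0xF) and drives the event framework for one second (-t 1); each lcore then reports how many events it got through, roughly 198,600 apiece in this run. A minimal way to reproduce it by hand, assuming an SPDK checkout built at the path this job uses and the usual hugepage/root setup for SPDK apps:

    cd /home/vagrant/spdk_repo/spdk
    sudo test/event/event_perf/event_perf -m 0xF -t 1   # same flags as the log: cores 0-3, run for 1 second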
00:05:50.112 [2024-07-26 14:11:09.616598] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63070 ] 00:05:50.112 [2024-07-26 14:11:09.779818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.371 [2024-07-26 14:11:09.936409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.748 test_start 00:05:51.748 oneshot 00:05:51.748 tick 100 00:05:51.748 tick 100 00:05:51.748 tick 250 00:05:51.748 tick 100 00:05:51.748 tick 100 00:05:51.748 tick 100 00:05:51.748 tick 250 00:05:51.748 tick 500 00:05:51.748 tick 100 00:05:51.748 tick 100 00:05:51.748 tick 250 00:05:51.748 tick 100 00:05:51.748 tick 100 00:05:51.748 test_end 00:05:51.748 ************************************ 00:05:51.748 END TEST event_reactor 00:05:51.748 ************************************ 00:05:51.748 00:05:51.748 real 0m1.691s 00:05:51.748 user 0m1.490s 00:05:51.748 sys 0m0.093s 00:05:51.748 14:11:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.748 14:11:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:51.748 14:11:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.748 14:11:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:51.748 14:11:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.748 14:11:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.748 ************************************ 00:05:51.748 START TEST event_reactor_perf 00:05:51.748 ************************************ 00:05:51.748 14:11:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:51.748 [2024-07-26 14:11:11.365108] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:51.748 [2024-07-26 14:11:11.365269] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63106 ] 00:05:52.007 [2024-07-26 14:11:11.534245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.007 [2024-07-26 14:11:11.700719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.384 test_start 00:05:53.384 test_end 00:05:53.384 Performance: 326952 events per second 00:05:53.384 ************************************ 00:05:53.384 END TEST event_reactor_perf 00:05:53.384 ************************************ 00:05:53.384 00:05:53.384 real 0m1.718s 00:05:53.384 user 0m1.505s 00:05:53.384 sys 0m0.103s 00:05:53.384 14:11:13 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.384 14:11:13 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.384 14:11:13 event -- event/event.sh@49 -- # uname -s 00:05:53.384 14:11:13 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:53.384 14:11:13 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.384 14:11:13 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.384 14:11:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.384 14:11:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.384 ************************************ 00:05:53.384 START TEST event_scheduler 00:05:53.384 ************************************ 00:05:53.384 14:11:13 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:53.643 * Looking for test storage... 00:05:53.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:53.643 14:11:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:53.643 14:11:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63174 00:05:53.643 14:11:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.643 14:11:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:53.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.643 14:11:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63174 00:05:53.643 14:11:13 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63174 ']' 00:05:53.643 14:11:13 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.643 14:11:13 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.643 14:11:13 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.643 14:11:13 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.643 14:11:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.643 [2024-07-26 14:11:13.273912] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:53.643 [2024-07-26 14:11:13.274306] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63174 ] 00:05:53.902 [2024-07-26 14:11:13.453539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.161 [2024-07-26 14:11:13.688754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.161 [2024-07-26 14:11:13.688958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.161 [2024-07-26 14:11:13.689071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.161 [2024-07-26 14:11:13.689093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:54.730 14:11:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.730 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.730 POWER: Cannot set governor of lcore 0 to performance 00:05:54.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.730 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.730 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:54.730 POWER: Cannot set governor of lcore 0 to userspace 00:05:54.730 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:54.730 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:54.730 POWER: Unable to set Power Management Environment for lcore 0 00:05:54.730 [2024-07-26 14:11:14.209698] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:54.730 [2024-07-26 14:11:14.209927] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:54.730 [2024-07-26 14:11:14.210075] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:54.730 [2024-07-26 14:11:14.210348] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:54.730 [2024-07-26 14:11:14.210630] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:54.730 [2024-07-26 14:11:14.210830] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.730 14:11:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 [2024-07-26 14:11:14.450509] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
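Because the scheduler app was started with --wait-for-rpc, the test can pick the dynamic scheduler before framework initialization completes. The POWER/cpufreq errors above only mean this VM exposes no scaling_governor files, so the DPDK governor is skipped while the dynamic scheduler still comes up with its default load/core/busy limits (20/80/95). A minimal sketch of the same RPC sequence against an app waiting on the default socket (socket path is an assumption):

    ./scripts/rpc.py framework_set_scheduler dynamic   # must be issued before framework init
    ./scripts/rpc.py framework_start_init              # finish start-up with the dynamic scheduler active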
00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.730 14:11:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.730 14:11:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 ************************************ 00:05:54.730 START TEST scheduler_create_thread 00:05:54.730 ************************************ 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 2 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.730 3 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.730 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.989 4 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 5 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 6 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 7 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 8 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 9 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 10 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.990 14:11:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.926 14:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.926 14:11:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:55.926 14:11:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:55.926 14:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.926 14:11:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.863 ************************************ 00:05:56.863 END TEST scheduler_create_thread 00:05:56.863 ************************************ 00:05:56.863 14:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.863 00:05:56.863 real 0m2.140s 00:05:56.863 user 0m0.019s 00:05:56.863 sys 0m0.005s 00:05:56.863 14:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.863 14:11:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.123 14:11:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.123 14:11:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63174 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63174 ']' 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63174 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63174 00:05:57.123 killing process with pid 63174 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63174' 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63174 00:05:57.123 14:11:16 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63174 00:05:57.382 [2024-07-26 14:11:17.082862] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
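The scheduler_create_thread sequence above is driven through the test's scheduler_plugin RPC extensions: threads are created with a CPU mask and an "active" (busy) percentage, one unpinned thread is re-weighted at runtime, and another is deleted again. A condensed sketch of the calls the trace shows, assuming rpc.py can locate the plugin that ships with test/event/scheduler:

    rpc="./scripts/rpc.py --plugin scheduler_plugin"             # plugin lookup path is an assumption
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
    $rpc scheduler_thread_create -n idle_pinned -m 0x1 -a 0      # idle thread pinned to core 0
    $rpc scheduler_thread_create -n half_active -a 0             # unpinned; returns a thread id (11 in this run)
    $rpc scheduler_thread_set_active 11 50                       # raise its busy percentage to 50
    $rpc scheduler_thread_create -n deleted -a 100               # thread id 12 in this run
    $rpc scheduler_thread_delete 12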
00:05:58.760 ************************************ 00:05:58.760 END TEST event_scheduler 00:05:58.760 ************************************ 00:05:58.760 00:05:58.760 real 0m5.006s 00:05:58.760 user 0m8.317s 00:05:58.760 sys 0m0.395s 00:05:58.760 14:11:18 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.760 14:11:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 14:11:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:58.760 14:11:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:58.760 14:11:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.760 14:11:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.760 14:11:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 ************************************ 00:05:58.760 START TEST app_repeat 00:05:58.760 ************************************ 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63280 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63280' 00:05:58.760 Process app_repeat pid: 63280 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.760 spdk_app_start Round 0 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:58.760 14:11:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63280 /var/tmp/spdk-nbd.sock 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63280 ']' 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:58.760 14:11:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.760 [2024-07-26 14:11:18.222857] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
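app_repeat exercises repeated SPDK app start/stop cycles against NBD-exported malloc bdevs: the script checks that the nbd kernel module is loadable, then launches the app on two cores with its own RPC socket and the repeat count the script sets (repeat_times=4 above). A sketch of that launch, assuming the module is available and the app is run from the repo root with the usual SPDK privileges:

    sudo modprobe nbd                                            # NBD support must be present
    sudo test/event/app_repeat/app_repeat \
        -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4                    # socket, two-core mask and repeat count from the log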
00:05:58.760 [2024-07-26 14:11:18.223116] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63280 ] 00:05:58.760 [2024-07-26 14:11:18.392839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.019 [2024-07-26 14:11:18.564990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.019 [2024-07-26 14:11:18.564995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.588 14:11:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.588 14:11:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:59.588 14:11:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.847 Malloc0 00:05:59.847 14:11:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.106 Malloc1 00:06:00.106 14:11:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.106 14:11:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.366 /dev/nbd0 00:06:00.366 14:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.366 14:11:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.366 14:11:19 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.366 1+0 records in 00:06:00.366 1+0 records out 00:06:00.366 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158103 s, 2.6 MB/s 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.366 14:11:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.366 14:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.366 14:11:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.366 14:11:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.625 /dev/nbd1 00:06:00.625 14:11:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.625 14:11:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.625 1+0 records in 00:06:00.625 1+0 records out 00:06:00.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281633 s, 14.5 MB/s 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.625 14:11:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.625 14:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.625 14:11:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.625 14:11:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.625 14:11:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.625 
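Each round builds two 64 MiB malloc bdevs with a 4096-byte block size, exports them as /dev/nbd0 and /dev/nbd1 over the app's NBD RPCs, and probes each device with a single direct-I/O read before using it. A sketch of that setup against the socket used here (scratch file path is an assumption):

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                                # 64 MiB bdev, 4 KiB blocks -> "Malloc0"
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # waitfornbd-style readability probe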
14:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.886 { 00:06:00.886 "nbd_device": "/dev/nbd0", 00:06:00.886 "bdev_name": "Malloc0" 00:06:00.886 }, 00:06:00.886 { 00:06:00.886 "nbd_device": "/dev/nbd1", 00:06:00.886 "bdev_name": "Malloc1" 00:06:00.886 } 00:06:00.886 ]' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.886 { 00:06:00.886 "nbd_device": "/dev/nbd0", 00:06:00.886 "bdev_name": "Malloc0" 00:06:00.886 }, 00:06:00.886 { 00:06:00.886 "nbd_device": "/dev/nbd1", 00:06:00.886 "bdev_name": "Malloc1" 00:06:00.886 } 00:06:00.886 ]' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.886 /dev/nbd1' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.886 /dev/nbd1' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.886 256+0 records in 00:06:00.886 256+0 records out 00:06:00.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00462464 s, 227 MB/s 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.886 256+0 records in 00:06:00.886 256+0 records out 00:06:00.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030167 s, 34.8 MB/s 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.886 256+0 records in 00:06:00.886 256+0 records out 00:06:00.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027948 s, 37.5 MB/s 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.886 14:11:20 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.886 14:11:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.145 14:11:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.405 14:11:21 event.app_repeat -- 
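The data-integrity half of the round writes 1 MiB of random data to a scratch file, copies it onto both NBD devices with direct I/O, compares each device back against the file with cmp, and finally stops both exports. A condensed sketch mirroring the commands above (scratch path is an assumption):

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct  # write it out
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"                             # read back and verify
    done
    rm /tmp/nbdrandtest
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1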
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.405 14:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.664 14:11:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.664 14:11:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.233 14:11:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.170 [2024-07-26 14:11:22.745313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.170 [2024-07-26 14:11:22.893332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.170 [2024-07-26 14:11:22.893342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.430 [2024-07-26 14:11:23.032206] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.430 [2024-07-26 14:11:23.032328] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.332 spdk_app_start Round 1 00:06:05.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.332 14:11:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.332 14:11:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.332 14:11:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63280 /var/tmp/spdk-nbd.sock 00:06:05.332 14:11:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63280 ']' 00:06:05.332 14:11:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.332 14:11:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.332 14:11:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
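Between rounds the harness asks the running app to tear its framework down by sending SIGTERM through the RPC layer, then pauses before reconnecting; the same pid (63280) is waited on again for Round 1, which is presumably why the bdev_register/bdev_unregister notification types report as already registered when the framework comes back up. The hand-off itself is just:

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # end the current round
    sleep 3                                                                  # give the app time to restart before the next round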
00:06:05.332 14:11:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.332 14:11:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.332 14:11:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.332 14:11:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.332 14:11:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.591 Malloc0 00:06:05.591 14:11:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.851 Malloc1 00:06:06.112 14:11:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.112 14:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.371 /dev/nbd0 00:06:06.371 14:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.371 14:11:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.371 1+0 records in 00:06:06.371 1+0 records out 
00:06:06.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291528 s, 14.1 MB/s 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.371 14:11:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.371 14:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.371 14:11:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.371 14:11:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.371 /dev/nbd1 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.631 1+0 records in 00:06:06.631 1+0 records out 00:06:06.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407639 s, 10.0 MB/s 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.631 14:11:26 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.631 14:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.891 { 00:06:06.891 "nbd_device": "/dev/nbd0", 00:06:06.891 "bdev_name": "Malloc0" 00:06:06.891 }, 00:06:06.891 { 00:06:06.891 "nbd_device": "/dev/nbd1", 00:06:06.891 "bdev_name": "Malloc1" 00:06:06.891 } 
00:06:06.891 ]' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.891 { 00:06:06.891 "nbd_device": "/dev/nbd0", 00:06:06.891 "bdev_name": "Malloc0" 00:06:06.891 }, 00:06:06.891 { 00:06:06.891 "nbd_device": "/dev/nbd1", 00:06:06.891 "bdev_name": "Malloc1" 00:06:06.891 } 00:06:06.891 ]' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.891 /dev/nbd1' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.891 /dev/nbd1' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.891 256+0 records in 00:06:06.891 256+0 records out 00:06:06.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642753 s, 163 MB/s 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.891 256+0 records in 00:06:06.891 256+0 records out 00:06:06.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243661 s, 43.0 MB/s 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.891 256+0 records in 00:06:06.891 256+0 records out 00:06:06.891 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304066 s, 34.5 MB/s 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.891 14:11:26 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.891 14:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.150 14:11:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.409 14:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.669 14:11:27 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.669 14:11:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.669 14:11:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.243 14:11:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.216 [2024-07-26 14:11:28.741264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.216 [2024-07-26 14:11:28.884497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.216 [2024-07-26 14:11:28.884499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.475 [2024-07-26 14:11:29.031600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.475 [2024-07-26 14:11:29.031669] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.380 spdk_app_start Round 2 00:06:11.380 14:11:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.380 14:11:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.380 14:11:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63280 /var/tmp/spdk-nbd.sock 00:06:11.380 14:11:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63280 ']' 00:06:11.380 14:11:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.380 14:11:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.380 14:11:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
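Round 1 above just completed the start/write/verify/stop cycle that round 2 now repeats. Condensed from the logged commands (the real nbd_rpc_data_verify in bdev/nbd_common.sh also polls /proc/partitions via waitfornbd and checks the device count, both omitted here):

# One NBD data-verification pass, reconstructed from the logged round-1 steps.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_server=/var/tmp/spdk-nbd.sock
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

"$rpc_py" -s "$rpc_server" nbd_start_disk Malloc0 /dev/nbd0
"$rpc_py" -s "$rpc_server" nbd_start_disk Malloc1 /dev/nbd1

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256                 # 1 MiB of reference data
for nbd in /dev/nbd0 /dev/nbd1; do
  dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct        # write it through each NBD device
done
for nbd in /dev/nbd0 /dev/nbd1; do
  cmp -b -n 1M "$tmp_file" "$nbd"                                   # read back and byte-compare
done
rm "$tmp_file"

"$rpc_py" -s "$rpc_server" nbd_stop_disk /dev/nbd0
"$rpc_py" -s "$rpc_server" nbd_stop_disk /dev/nbd1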
00:06:11.380 14:11:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.380 14:11:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.380 14:11:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.380 14:11:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:11.380 14:11:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.639 Malloc0 00:06:11.639 14:11:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.898 Malloc1 00:06:11.898 14:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.898 14:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.157 /dev/nbd0 00:06:12.157 14:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.157 14:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.157 1+0 records in 00:06:12.157 1+0 records out 
00:06:12.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171749 s, 23.8 MB/s 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.157 14:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.157 14:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.157 14:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.157 14:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.415 /dev/nbd1 00:06:12.415 14:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.416 14:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.416 1+0 records in 00:06:12.416 1+0 records out 00:06:12.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293705 s, 13.9 MB/s 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.416 14:11:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.416 14:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.416 14:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.416 14:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.416 14:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.416 14:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.674 { 00:06:12.674 "nbd_device": "/dev/nbd0", 00:06:12.674 "bdev_name": "Malloc0" 00:06:12.674 }, 00:06:12.674 { 00:06:12.674 "nbd_device": "/dev/nbd1", 00:06:12.674 "bdev_name": "Malloc1" 00:06:12.674 } 
00:06:12.674 ]' 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.674 { 00:06:12.674 "nbd_device": "/dev/nbd0", 00:06:12.674 "bdev_name": "Malloc0" 00:06:12.674 }, 00:06:12.674 { 00:06:12.674 "nbd_device": "/dev/nbd1", 00:06:12.674 "bdev_name": "Malloc1" 00:06:12.674 } 00:06:12.674 ]' 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.674 /dev/nbd1' 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.674 /dev/nbd1' 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.674 14:11:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.675 256+0 records in 00:06:12.675 256+0 records out 00:06:12.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0089859 s, 117 MB/s 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.675 14:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.934 256+0 records in 00:06:12.934 256+0 records out 00:06:12.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273544 s, 38.3 MB/s 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.934 256+0 records in 00:06:12.934 256+0 records out 00:06:12.934 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028383 s, 36.9 MB/s 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.934 14:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.192 14:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.192 14:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.192 14:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.192 14:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.192 14:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.193 14:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.193 14:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.193 14:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.193 14:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.193 14:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.451 14:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.451 14:11:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.451 14:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.451 14:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.710 14:11:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.710 14:11:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.968 14:11:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.905 [2024-07-26 14:11:34.539000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.164 [2024-07-26 14:11:34.688746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.164 [2024-07-26 14:11:34.688774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.164 [2024-07-26 14:11:34.828444] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.164 [2024-07-26 14:11:34.828525] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.069 14:11:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63280 /var/tmp/spdk-nbd.sock 00:06:17.069 14:11:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63280 ']' 00:06:17.069 14:11:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.069 14:11:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.069 14:11:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
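With round 2 torn down, the final round starts above. The loop driving all three rounds has roughly this shape, reconstructed from the event/event.sh steps in the log ($app_pid stands in for the logged pid 63280; waitforlisten is the autotest_common.sh helper):

# Rough shape of the app_repeat round loop; the logged run restarts the app
# between rounds, and that part is omitted here.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_server=/var/tmp/spdk-nbd.sock

for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten "$app_pid" "$rpc_server"                        # wait for the app's UNIX socket
  "$rpc_py" -s "$rpc_server" bdev_malloc_create 64 4096         # Malloc0
  "$rpc_py" -s "$rpc_server" bdev_malloc_create 64 4096         # Malloc1
  nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
  "$rpc_py" -s "$rpc_server" spdk_kill_instance SIGTERM         # ask the app to exit
  sleep 3                                                       # give it time before the next round
done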
00:06:17.069 14:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.069 14:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:17.328 14:11:36 event.app_repeat -- event/event.sh@39 -- # killprocess 63280 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63280 ']' 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63280 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63280 00:06:17.328 killing process with pid 63280 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63280' 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63280 00:06:17.328 14:11:36 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63280 00:06:18.265 spdk_app_start is called in Round 0. 00:06:18.265 Shutdown signal received, stop current app iteration 00:06:18.265 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:18.265 spdk_app_start is called in Round 1. 00:06:18.265 Shutdown signal received, stop current app iteration 00:06:18.265 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:18.265 spdk_app_start is called in Round 2. 00:06:18.265 Shutdown signal received, stop current app iteration 00:06:18.265 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:18.265 spdk_app_start is called in Round 3. 00:06:18.265 Shutdown signal received, stop current app iteration 00:06:18.265 14:11:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.265 14:11:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.265 00:06:18.265 real 0m19.615s 00:06:18.265 user 0m42.505s 00:06:18.265 sys 0m2.385s 00:06:18.265 ************************************ 00:06:18.265 END TEST app_repeat 00:06:18.265 ************************************ 00:06:18.265 14:11:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.265 14:11:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.265 14:11:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.265 14:11:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.265 14:11:37 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.265 14:11:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.265 14:11:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.265 ************************************ 00:06:18.265 START TEST cpu_locks 00:06:18.265 ************************************ 00:06:18.265 14:11:37 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.265 * Looking for test storage... 
00:06:18.265 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.265 14:11:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.265 14:11:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.265 14:11:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.265 14:11:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.265 14:11:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.265 14:11:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.265 14:11:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.265 ************************************ 00:06:18.265 START TEST default_locks 00:06:18.265 ************************************ 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63714 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 63714 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63714 ']' 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.265 14:11:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.524 [2024-07-26 14:11:38.042273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:18.524 [2024-07-26 14:11:38.042448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63714 ] 00:06:18.524 [2024-07-26 14:11:38.209969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.783 [2024-07-26 14:11:38.357005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.351 14:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.351 14:11:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:19.351 14:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 63714 00:06:19.351 14:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 63714 00:06:19.351 14:11:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 63714 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 63714 ']' 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 63714 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63714 00:06:19.919 killing process with pid 63714 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63714' 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 63714 00:06:19.919 14:11:39 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 63714 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63714 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63714 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 63714 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 63714 ']' 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
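The locks_exist check above is a plain file-lock inspection; reconstructed from the logged cpu_locks.sh@22 commands (63714 is the spdk_tgt pid from this run):

# Verify that the running target holds its CPU core lock file.
spdk_tgt_pid=63714
if lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock; then
  echo "core lock held by pid $spdk_tgt_pid"                    # expected while spdk_tgt runs on core 0
else
  echo "no spdk_cpu_lock held by pid $spdk_tgt_pid" >&2
  exit 1
fi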
00:06:21.823 ERROR: process (pid: 63714) is no longer running 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.823 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63714) - No such process 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.823 00:06:21.823 real 0m3.232s 00:06:21.823 user 0m3.269s 00:06:21.823 sys 0m0.564s 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.823 14:11:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.823 ************************************ 00:06:21.823 END TEST default_locks 00:06:21.823 ************************************ 00:06:21.823 14:11:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.823 14:11:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.823 14:11:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.823 14:11:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.823 ************************************ 00:06:21.823 START TEST default_locks_via_rpc 00:06:21.823 ************************************ 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63785 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 63785 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63785 ']' 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.823 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.823 14:11:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.823 [2024-07-26 14:11:41.323167] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:21.823 [2024-07-26 14:11:41.323824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:06:21.823 [2024-07-26 14:11:41.493678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.083 [2024-07-26 14:11:41.642123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 63785 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 63785 00:06:22.651 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 63785 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 63785 ']' 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 63785 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.917 14:11:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63785 00:06:22.917 killing process with pid 63785 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63785' 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 63785 00:06:22.917 14:11:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 63785 00:06:24.831 ************************************ 00:06:24.831 END TEST default_locks_via_rpc 00:06:24.831 ************************************ 00:06:24.831 00:06:24.831 real 0m3.143s 00:06:24.831 user 0m3.227s 00:06:24.831 sys 0m0.573s 00:06:24.831 14:11:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.831 14:11:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.831 14:11:44 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:24.831 14:11:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.831 14:11:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.831 14:11:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.831 ************************************ 00:06:24.831 START TEST non_locking_app_on_locked_coremask 00:06:24.831 ************************************ 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63854 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63854 /var/tmp/spdk.sock 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63854 ']' 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.831 14:11:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.831 [2024-07-26 14:11:44.514406] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:24.831 [2024-07-26 14:11:44.514583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63854 ] 00:06:25.089 [2024-07-26 14:11:44.681967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.090 [2024-07-26 14:11:44.828888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63870 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63870 /var/tmp/spdk2.sock 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63870 ']' 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.026 14:11:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.026 [2024-07-26 14:11:45.515547] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:26.026 [2024-07-26 14:11:45.516003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63870 ] 00:06:26.026 [2024-07-26 14:11:45.680693] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
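The 'CPU core locks deactivated' notice above is the point of this test: the second target is launched with --disable-cpumask-locks so it can come up on the same core without contending for the lock. Condensed from the logged setup (pids 63854/63870 and sockets as in this run; waitforlisten is the autotest_common.sh helper):

# Two targets on core 0: only the first takes the spdk_cpu_lock file.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 &                                            # first instance locks core 0
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

"$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # second instance skips core locking
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock

lslocks -p "$pid1" | grep -q spdk_cpu_lock                      # the lock stays with the first instance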
00:06:26.026 [2024-07-26 14:11:45.680760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.284 [2024-07-26 14:11:45.978425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.661 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.661 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:27.661 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63854 00:06:27.661 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63854 00:06:27.661 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63854 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63854 ']' 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63854 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63854 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.228 killing process with pid 63854 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63854' 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63854 00:06:28.228 14:11:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63854 00:06:32.419 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63870 00:06:32.419 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63870 ']' 00:06:32.419 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63870 00:06:32.419 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:32.420 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.420 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63870 00:06:32.420 killing process with pid 63870 00:06:32.420 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.420 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.420 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63870' 00:06:32.420 14:11:51 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63870 00:06:32.420 14:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63870 00:06:33.797 00:06:33.797 real 0m8.771s 00:06:33.797 user 0m9.159s 00:06:33.797 sys 0m1.123s 00:06:33.797 ************************************ 00:06:33.797 END TEST non_locking_app_on_locked_coremask 00:06:33.797 ************************************ 00:06:33.797 14:11:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.797 14:11:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.797 14:11:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:33.797 14:11:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.797 14:11:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.797 14:11:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.797 ************************************ 00:06:33.797 START TEST locking_app_on_unlocked_coremask 00:06:33.797 ************************************ 00:06:33.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63986 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63986 /var/tmp/spdk.sock 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63986 ']' 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.797 14:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.797 [2024-07-26 14:11:53.348527] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:33.797 [2024-07-26 14:11:53.349009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63986 ] 00:06:33.797 [2024-07-26 14:11:53.518410] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
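Here the roles are reversed: the first target runs with --disable-cpumask-locks (hence the notice above), and it is the second, default-locking instance whose lock the test verifies. Condensed from the logged setup (pids 63986/64002 as in this run; waitforlisten is the autotest_common.sh helper):

# Inverse of the previous case: the lock belongs to the second instance.
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$spdk_tgt" -m 0x1 --disable-cpumask-locks &                    # first instance leaves core 0 unlocked
pid1=$!
waitforlisten "$pid1" /var/tmp/spdk.sock

"$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &                     # second instance locks core 0 normally
pid2=$!
waitforlisten "$pid2" /var/tmp/spdk2.sock

lslocks -p "$pid2" | grep -q spdk_cpu_lock                      # now the lock is held by the second instance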
00:06:33.797 [2024-07-26 14:11:53.518650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.056 [2024-07-26 14:11:53.677077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64002 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64002 /var/tmp/spdk2.sock 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64002 ']' 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:34.624 14:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.624 [2024-07-26 14:11:54.378055] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:34.624 [2024-07-26 14:11:54.378218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64002 ] 00:06:34.883 [2024-07-26 14:11:54.547577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.142 [2024-07-26 14:11:54.862659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.520 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.520 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:36.520 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64002 00:06:36.520 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64002 00:06:36.520 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63986 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63986 ']' 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63986 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63986 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:37.458 killing process with pid 63986 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63986' 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63986 00:06:37.458 14:11:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63986 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64002 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64002 ']' 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64002 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64002 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.771 killing process with pid 64002 00:06:40.771 14:12:00 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64002' 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64002 00:06:40.771 14:12:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64002 00:06:42.676 00:06:42.676 real 0m8.994s 00:06:42.676 user 0m9.414s 00:06:42.676 sys 0m1.133s 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:42.676 ************************************ 00:06:42.676 END TEST locking_app_on_unlocked_coremask 00:06:42.676 ************************************ 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.676 14:12:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:42.676 14:12:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.676 14:12:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.676 14:12:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.676 ************************************ 00:06:42.676 START TEST locking_app_on_locked_coremask 00:06:42.676 ************************************ 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64126 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64126 /var/tmp/spdk.sock 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64126 ']' 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.676 14:12:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.676 [2024-07-26 14:12:02.390693] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
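Both startup modes used by these tests appear verbatim in the log: the default mode, which claims one lock file per core, and --disable-cpumask-locks, which prints the "CPU core locks deactivated." notice. A hedged side-by-side in the same order the unlocked-coremask test used, with the second instance pointed at its own RPC socket:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
        # prints "CPU core locks deactivated." and takes no lock on core 0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
        # free to claim /var/tmp/spdk_cpu_lock_000, so both instances run on core 0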
00:06:42.676 [2024-07-26 14:12:02.390885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64126 ] 00:06:42.935 [2024-07-26 14:12:02.563476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.193 [2024-07-26 14:12:02.726480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64142 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64142 /var/tmp/spdk2.sock 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64142 /var/tmp/spdk2.sock 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64142 /var/tmp/spdk2.sock 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64142 ']' 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.760 14:12:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.760 [2024-07-26 14:12:03.426794] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:43.760 [2024-07-26 14:12:03.426966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64142 ] 00:06:44.019 [2024-07-26 14:12:03.591344] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64126 has claimed it. 00:06:44.019 [2024-07-26 14:12:03.591495] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.586 ERROR: process (pid: 64142) is no longer running 00:06:44.586 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64142) - No such process 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64126 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64126 00:06:44.586 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64126 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64126 ']' 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64126 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64126 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.846 killing process with pid 64126 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64126' 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64126 00:06:44.846 14:12:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64126 00:06:46.752 00:06:46.752 real 0m3.919s 00:06:46.752 user 0m4.365s 00:06:46.752 sys 0m0.617s 00:06:46.752 ************************************ 00:06:46.752 14:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.752 
14:12:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.752 END TEST locking_app_on_locked_coremask 00:06:46.752 ************************************ 00:06:46.752 14:12:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:46.752 14:12:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.752 14:12:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.752 14:12:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.752 ************************************ 00:06:46.752 START TEST locking_overlapped_coremask 00:06:46.752 ************************************ 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64201 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64201 /var/tmp/spdk.sock 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64201 ']' 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.752 14:12:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.752 [2024-07-26 14:12:06.367533] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:46.752 [2024-07-26 14:12:06.367717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64201 ] 00:06:47.012 [2024-07-26 14:12:06.536353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.012 [2024-07-26 14:12:06.693743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.012 [2024-07-26 14:12:06.693872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.012 [2024-07-26 14:12:06.693878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64219 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64219 /var/tmp/spdk2.sock 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64219 /var/tmp/spdk2.sock 00:06:47.579 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:47.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64219 /var/tmp/spdk2.sock 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64219 ']' 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.580 14:12:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.839 [2024-07-26 14:12:07.433766] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
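The conflict this test provokes is visible in the two core masks above: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they intersect on core 2, which is exactly the core named in the claim error that follows. A quick check of that arithmetic:

    printf 'overlapping cores mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2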
00:06:47.839 [2024-07-26 14:12:07.433991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64219 ] 00:06:48.098 [2024-07-26 14:12:07.617105] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64201 has claimed it. 00:06:48.098 [2024-07-26 14:12:07.617242] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.358 ERROR: process (pid: 64219) is no longer running 00:06:48.358 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64219) - No such process 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64201 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64201 ']' 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64201 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64201 00:06:48.358 killing process with pid 64201 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64201' 00:06:48.358 14:12:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64201 00:06:48.358 14:12:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64201 00:06:50.263 00:06:50.263 real 0m3.676s 00:06:50.263 user 0m9.693s 00:06:50.263 sys 0m0.523s 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.263 ************************************ 00:06:50.263 END TEST locking_overlapped_coremask 00:06:50.263 ************************************ 00:06:50.263 14:12:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.263 14:12:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.263 14:12:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.263 14:12:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.263 ************************************ 00:06:50.263 START TEST locking_overlapped_coremask_via_rpc 00:06:50.263 ************************************ 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64284 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64284 /var/tmp/spdk.sock 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64284 ']' 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.263 14:12:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.526 [2024-07-26 14:12:10.076964] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:50.526 [2024-07-26 14:12:10.077133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64284 ] 00:06:50.526 [2024-07-26 14:12:10.233337] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
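The check_remaining_locks helper that ran at the end of the previous test compares the lock files actually present against the set a 0x7 mask should produce. A hedged standalone version of that comparison, taken from the same shell constructs shown in the log:

    locks=(/var/tmp/spdk_cpu_lock_*)                      # whatever lock files exist right now
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 for mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "lock files match the 0x7 mask"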
00:06:50.526 [2024-07-26 14:12:10.233402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.784 [2024-07-26 14:12:10.385386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.784 [2024-07-26 14:12:10.385515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.784 [2024-07-26 14:12:10.385545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64302 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64302 /var/tmp/spdk2.sock 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64302 ']' 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.352 14:12:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.611 [2024-07-26 14:12:11.138037] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:51.611 [2024-07-26 14:12:11.138219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64302 ] 00:06:51.611 [2024-07-26 14:12:11.313861] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.611 [2024-07-26 14:12:11.313936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.178 [2024-07-26 14:12:11.648356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.178 [2024-07-26 14:12:11.648439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.178 [2024-07-26 14:12:11.648466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.557 [2024-07-26 14:12:12.962179] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64284 has claimed it. 
00:06:53.557 request: 00:06:53.557 { 00:06:53.557 "method": "framework_enable_cpumask_locks", 00:06:53.557 "req_id": 1 00:06:53.557 } 00:06:53.557 Got JSON-RPC error response 00:06:53.557 response: 00:06:53.557 { 00:06:53.557 "code": -32603, 00:06:53.557 "message": "Failed to claim CPU core: 2" 00:06:53.557 } 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64284 /var/tmp/spdk.sock 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64284 ']' 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.557 14:12:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64302 /var/tmp/spdk2.sock 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64302 ']' 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
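Because both instances in this test start with --disable-cpumask-locks, the locks are turned on afterwards over JSON-RPC; the request/response pair above shows the claim failing because core 2 is already held. A hedged sketch of the same two calls made by hand with scripts/rpc.py (the script path and socket are the ones used elsewhere in this log):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
        # first instance: succeeds and claims its cores
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
        # second instance: fails with -32603 "Failed to claim CPU core: 2"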
00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.557 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.816 00:06:53.816 real 0m3.513s 00:06:53.816 user 0m1.292s 00:06:53.816 sys 0m0.187s 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.816 ************************************ 00:06:53.816 END TEST locking_overlapped_coremask_via_rpc 00:06:53.816 14:12:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.816 ************************************ 00:06:53.816 14:12:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.816 14:12:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64284 ]] 00:06:53.816 14:12:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64284 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64284 ']' 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64284 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64284 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.816 killing process with pid 64284 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64284' 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64284 00:06:53.816 14:12:13 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64284 00:06:56.351 14:12:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64302 ]] 00:06:56.351 14:12:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64302 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64302 ']' 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64302 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.351 
14:12:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64302 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:56.351 killing process with pid 64302 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64302' 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64302 00:06:56.351 14:12:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64302 00:06:57.728 14:12:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.729 14:12:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:57.729 14:12:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64284 ]] 00:06:57.729 14:12:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64284 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64284 ']' 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64284 00:06:57.729 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64284) - No such process 00:06:57.729 Process with pid 64284 is not found 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64284 is not found' 00:06:57.729 14:12:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64302 ]] 00:06:57.729 14:12:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64302 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64302 ']' 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64302 00:06:57.729 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64302) - No such process 00:06:57.729 Process with pid 64302 is not found 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64302 is not found' 00:06:57.729 14:12:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:57.729 00:06:57.729 real 0m39.537s 00:06:57.729 user 1m7.815s 00:06:57.729 sys 0m5.610s 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.729 14:12:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.729 ************************************ 00:06:57.729 END TEST cpu_locks 00:06:57.729 ************************************ 00:06:57.729 00:06:57.729 real 1m9.683s 00:06:57.729 user 2m6.232s 00:06:57.729 sys 0m8.930s 00:06:57.729 14:12:17 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.729 14:12:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.729 ************************************ 00:06:57.729 END TEST event 00:06:57.729 ************************************ 00:06:57.729 14:12:17 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.729 14:12:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.729 14:12:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.729 14:12:17 -- common/autotest_common.sh@10 -- # set +x 00:06:57.729 ************************************ 00:06:57.729 START TEST thread 00:06:57.729 ************************************ 00:06:57.729 14:12:17 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:57.988 * Looking for test storage... 
00:06:57.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:57.988 14:12:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.988 14:12:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:57.988 14:12:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.988 14:12:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.988 ************************************ 00:06:57.988 START TEST thread_poller_perf 00:06:57.988 ************************************ 00:06:57.988 14:12:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:57.988 [2024-07-26 14:12:17.589599] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:57.988 [2024-07-26 14:12:17.589779] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64465 ] 00:06:58.247 [2024-07-26 14:12:17.762885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.247 [2024-07-26 14:12:17.988426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.247 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:59.624 ====================================== 00:06:59.624 busy:2212630209 (cyc) 00:06:59.624 total_run_count: 365000 00:06:59.624 tsc_hz: 2200000000 (cyc) 00:06:59.624 ====================================== 00:06:59.624 poller_cost: 6062 (cyc), 2755 (nsec) 00:06:59.624 00:06:59.624 real 0m1.781s 00:06:59.624 user 0m1.578s 00:06:59.624 sys 0m0.094s 00:06:59.624 14:12:19 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.624 14:12:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.624 ************************************ 00:06:59.624 END TEST thread_poller_perf 00:06:59.624 ************************************ 00:06:59.624 14:12:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.624 14:12:19 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:59.624 14:12:19 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.624 14:12:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.624 ************************************ 00:06:59.624 START TEST thread_poller_perf 00:06:59.624 ************************************ 00:06:59.624 14:12:19 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.883 [2024-07-26 14:12:19.420651] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:59.883 [2024-07-26 14:12:19.420827] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64507 ] 00:06:59.883 [2024-07-26 14:12:19.589538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.141 Running 1000 pollers for 1 seconds with 0 microseconds period. 
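The first poller_perf summary above can be sanity-checked by hand: poller_cost is busy cycles divided by total_run_count, and the nanosecond figure is that quotient divided by the TSC rate in GHz.

    echo $(( 2212630209 / 365000 ))                # 6062 cycles per poller invocation
    awk 'BEGIN { printf "%.0f\n", 6062 / 2.2 }'    # ~2755 ns at tsc_hz = 2200000000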
00:07:00.142 [2024-07-26 14:12:19.737908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.519 ====================================== 00:07:01.519 busy:2203906910 (cyc) 00:07:01.519 total_run_count: 4617000 00:07:01.519 tsc_hz: 2200000000 (cyc) 00:07:01.519 ====================================== 00:07:01.519 poller_cost: 477 (cyc), 216 (nsec) 00:07:01.519 00:07:01.519 real 0m1.690s 00:07:01.519 user 0m1.494s 00:07:01.519 sys 0m0.088s 00:07:01.519 14:12:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.519 14:12:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.520 ************************************ 00:07:01.520 END TEST thread_poller_perf 00:07:01.520 ************************************ 00:07:01.520 14:12:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.520 00:07:01.520 real 0m3.655s 00:07:01.520 user 0m3.133s 00:07:01.520 sys 0m0.299s 00:07:01.520 14:12:21 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.520 14:12:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.520 ************************************ 00:07:01.520 END TEST thread 00:07:01.520 ************************************ 00:07:01.520 14:12:21 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:01.520 14:12:21 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.520 14:12:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.520 14:12:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.520 14:12:21 -- common/autotest_common.sh@10 -- # set +x 00:07:01.520 ************************************ 00:07:01.520 START TEST app_cmdline 00:07:01.520 ************************************ 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:01.520 * Looking for test storage... 00:07:01.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:01.520 14:12:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.520 14:12:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64587 00:07:01.520 14:12:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64587 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 64587 ']' 00:07:01.520 14:12:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.520 14:12:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.779 [2024-07-26 14:12:21.354417] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
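The cmdline test's spdk_tgt above is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable on the socket; the env_dpdk_get_mem_stats call later in the test is expected to be rejected. A hedged sketch of the three calls the test makes:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # rejected: -32601 "Method not found"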
00:07:01.779 [2024-07-26 14:12:21.354592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64587 ] 00:07:01.779 [2024-07-26 14:12:21.524883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.038 [2024-07-26 14:12:21.673271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.607 14:12:22 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.607 14:12:22 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:02.607 14:12:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:02.866 { 00:07:02.866 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:02.866 "fields": { 00:07:02.866 "major": 24, 00:07:02.866 "minor": 9, 00:07:02.866 "patch": 0, 00:07:02.866 "suffix": "-pre", 00:07:02.866 "commit": "704257090" 00:07:02.866 } 00:07:02.866 } 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.866 14:12:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:02.866 14:12:22 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.125 request: 00:07:03.125 { 00:07:03.125 "method": "env_dpdk_get_mem_stats", 00:07:03.125 "req_id": 1 00:07:03.125 } 00:07:03.125 Got JSON-RPC error response 00:07:03.125 response: 00:07:03.125 { 00:07:03.125 "code": -32601, 00:07:03.125 "message": "Method not found" 00:07:03.125 } 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.125 14:12:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64587 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 64587 ']' 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 64587 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64587 00:07:03.125 killing process with pid 64587 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64587' 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@969 -- # kill 64587 00:07:03.125 14:12:22 app_cmdline -- common/autotest_common.sh@974 -- # wait 64587 00:07:05.028 00:07:05.028 real 0m3.398s 00:07:05.028 user 0m3.866s 00:07:05.028 sys 0m0.468s 00:07:05.028 14:12:24 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.028 14:12:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.028 ************************************ 00:07:05.028 END TEST app_cmdline 00:07:05.028 ************************************ 00:07:05.028 14:12:24 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.028 14:12:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:05.028 14:12:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.028 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.028 ************************************ 00:07:05.028 START TEST version 00:07:05.028 ************************************ 00:07:05.028 14:12:24 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:05.028 * Looking for test storage... 
00:07:05.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.028 14:12:24 version -- app/version.sh@17 -- # get_header_version major 00:07:05.028 14:12:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # cut -f2 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.028 14:12:24 version -- app/version.sh@17 -- # major=24 00:07:05.028 14:12:24 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.028 14:12:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # cut -f2 00:07:05.028 14:12:24 version -- app/version.sh@18 -- # minor=9 00:07:05.028 14:12:24 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.028 14:12:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # cut -f2 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.028 14:12:24 version -- app/version.sh@19 -- # patch=0 00:07:05.028 14:12:24 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.028 14:12:24 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # cut -f2 00:07:05.028 14:12:24 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.028 14:12:24 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.028 14:12:24 version -- app/version.sh@22 -- # version=24.9 00:07:05.028 14:12:24 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.028 14:12:24 version -- app/version.sh@28 -- # version=24.9rc0 00:07:05.028 14:12:24 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:05.029 14:12:24 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.029 14:12:24 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:05.029 14:12:24 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:05.029 00:07:05.029 real 0m0.151s 00:07:05.029 user 0m0.080s 00:07:05.029 sys 0m0.097s 00:07:05.029 14:12:24 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.029 14:12:24 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.029 ************************************ 00:07:05.029 END TEST version 00:07:05.029 ************************************ 00:07:05.287 14:12:24 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:05.287 14:12:24 -- spdk/autotest.sh@202 -- # uname -s 00:07:05.287 14:12:24 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:05.287 14:12:24 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:05.287 14:12:24 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:05.287 14:12:24 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:07:05.287 14:12:24 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:05.287 14:12:24 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
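The get_header_version helper in the version test above extracts each field from include/spdk/version.h with grep, cut and tr. A hedged one-liner for the major field, assuming the header keeps its tab-separated #define layout:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'   # -> 24
    # minor (9), patch (0) and suffix (-pre) are read the same way, composing the
    # 24.9rc0 string compared against python3 -c 'import spdk; print(spdk.__version__)'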
00:07:05.287 14:12:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.287 14:12:24 -- common/autotest_common.sh@10 -- # set +x 00:07:05.287 ************************************ 00:07:05.287 START TEST blockdev_nvme 00:07:05.287 ************************************ 00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:05.287 * Looking for test storage... 00:07:05.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:05.287 14:12:24 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=64744 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 64744 00:07:05.287 14:12:24 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 64744 ']' 00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
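The start_spdk_tgt step above follows the pattern used throughout these suites: launch build/bin/spdk_tgt in the background, record its PID (64744 here), install a trap that kills it on exit, and wait until the process answers on the default UNIX-domain RPC socket. The harness does the waiting with its waitforlisten helper; a rough hand-rolled equivalent, using rpc.py spdk_get_version as the liveness probe (an assumption for illustration, not what the helper itself does), would look like:

# Sketch: start the target and poll its RPC socket until it responds.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5   # keep polling /var/tmp/spdk.sock until the target is listening
done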
00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.287 14:12:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.288 [2024-07-26 14:12:25.024002] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:05.288 [2024-07-26 14:12:25.024178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64744 ] 00:07:05.546 [2024-07-26 14:12:25.192759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.826 [2024-07-26 14:12:25.352702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.415 14:12:25 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.415 14:12:25 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:06.415 14:12:25 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:06.415 14:12:25 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:06.415 14:12:25 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:06.415 14:12:25 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:06.415 14:12:25 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.415 14:12:26 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:06.415 14:12:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.415 14:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.674 14:12:26 
blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:06.674 14:12:26 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.674 14:12:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.934 14:12:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:06.934 14:12:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:06.934 14:12:26 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2f412c1c-c0b0-4c25-8ea4-48b7cb644ed4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2f412c1c-c0b0-4c25-8ea4-48b7cb644ed4",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d53bc422-7cac-4530-b0ca-811268fec14b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d53bc422-7cac-4530-b0ca-811268fec14b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": 
false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e54b7d95-eb66-4ffe-8714-2d90d92c2ebe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e54b7d95-eb66-4ffe-8714-2d90d92c2ebe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b3705b2d-33bd-4d05-a782-f958305c2da1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b3705b2d-33bd-4d05-a782-f958305c2da1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' 
' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "82a701a4-0e06-4f03-af47-dc9dd24535ed"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "82a701a4-0e06-4f03-af47-dc9dd24535ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "73e64bf2-34f2-46a1-b558-32ed1c9fdf9c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "73e64bf2-34f2-46a1-b558-32ed1c9fdf9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.934 14:12:26 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.934 14:12:26 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:06.934 14:12:26 blockdev_nvme -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:06.934 14:12:26 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 64744 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 64744 ']' 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 64744 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64744 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.934 killing process with pid 64744 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64744' 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 64744 00:07:06.934 14:12:26 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 64744 00:07:08.839 14:12:28 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:08.839 14:12:28 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.839 14:12:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:08.839 14:12:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.839 14:12:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.839 ************************************ 00:07:08.839 START TEST bdev_hello_world 00:07:08.839 ************************************ 00:07:08.839 14:12:28 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:08.839 [2024-07-26 14:12:28.388389] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:08.839 [2024-07-26 14:12:28.388568] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64834 ] 00:07:08.839 [2024-07-26 14:12:28.557614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.098 [2024-07-26 14:12:28.713520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.666 [2024-07-26 14:12:29.260389] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:09.666 [2024-07-26 14:12:29.260443] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:09.666 [2024-07-26 14:12:29.260465] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:09.666 [2024-07-26 14:12:29.263075] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:09.666 [2024-07-26 14:12:29.263650] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:09.666 [2024-07-26 14:12:29.263684] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:09.666 [2024-07-26 14:12:29.263884] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
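The Hello World exchange above comes from the standalone hello_bdev example: it opens the bdev named with -b over the JSON configuration passed with --json, writes a buffer through an I/O channel, reads it back, and prints the string before stopping the app. The test feeds it the bdev.json that gen_nvme.sh generated earlier (bdev_nvme_attach_controller entries for all four PCIe controllers); a trimmed-down sketch against just the first controller, assuming the usual "subsystems" wrapper for --json config files and a scratch path of my own choosing, would be:

# Sketch: minimal one-controller config plus the example invocation.
cat > /tmp/nvme0_hello.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/nvme0_hello.json -b Nvme0n1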
00:07:09.666 00:07:09.666 [2024-07-26 14:12:29.263930] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:10.605 00:07:10.605 real 0m1.897s 00:07:10.605 user 0m1.594s 00:07:10.605 sys 0m0.197s 00:07:10.605 14:12:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.605 14:12:30 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:10.605 ************************************ 00:07:10.605 END TEST bdev_hello_world 00:07:10.605 ************************************ 00:07:10.605 14:12:30 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:10.605 14:12:30 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:10.605 14:12:30 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.605 14:12:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.605 ************************************ 00:07:10.605 START TEST bdev_bounds 00:07:10.605 ************************************ 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=64876 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.605 Process bdevio pid: 64876 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 64876' 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 64876 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 64876 ']' 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.605 14:12:30 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:10.605 [2024-07-26 14:12:30.334645] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:10.605 [2024-07-26 14:12:30.334802] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64876 ] 00:07:10.864 [2024-07-26 14:12:30.504949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.123 [2024-07-26 14:12:30.665583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.123 [2024-07-26 14:12:30.665723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.123 [2024-07-26 14:12:30.665745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.691 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.691 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:11.691 14:12:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:11.691 I/O targets: 00:07:11.691 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:11.691 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:11.691 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:11.691 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:11.691 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:11.691 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:11.691 00:07:11.691 00:07:11.691 CUnit - A unit testing framework for C - Version 2.1-3 00:07:11.691 http://cunit.sourceforge.net/ 00:07:11.691 00:07:11.691 00:07:11.691 Suite: bdevio tests on: Nvme3n1 00:07:11.691 Test: blockdev write read block ...passed 00:07:11.691 Test: blockdev write zeroes read block ...passed 00:07:11.691 Test: blockdev write zeroes read no split ...passed 00:07:11.691 Test: blockdev write zeroes read split ...passed 00:07:11.691 Test: blockdev write zeroes read split partial ...passed 00:07:11.691 Test: blockdev reset ...[2024-07-26 14:12:31.451487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:11.951 passed 00:07:11.951 Test: blockdev write read 8 blocks ...[2024-07-26 14:12:31.455794] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:11.951 passed 00:07:11.951 Test: blockdev write read size > 128k ...passed 00:07:11.951 Test: blockdev write read invalid size ...passed 00:07:11.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.951 Test: blockdev write read max offset ...passed 00:07:11.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.951 Test: blockdev writev readv 8 blocks ...passed 00:07:11.951 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.951 Test: blockdev writev readv block ...passed 00:07:11.951 Test: blockdev writev readv size > 128k ...passed 00:07:11.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.951 Test: blockdev comparev and writev ...[2024-07-26 14:12:31.465189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27380a000 len:0x1000 00:07:11.951 [2024-07-26 14:12:31.465275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev nvme passthru rw ...passed 00:07:11.951 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.951 Test: blockdev nvme admin passthru ...[2024-07-26 14:12:31.466181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.951 [2024-07-26 14:12:31.466238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev copy ...passed 00:07:11.951 Suite: bdevio tests on: Nvme2n3 00:07:11.951 Test: blockdev write read block ...passed 00:07:11.951 Test: blockdev write zeroes read block ...passed 00:07:11.951 Test: blockdev write zeroes read no split ...passed 00:07:11.951 Test: blockdev write zeroes read split ...passed 00:07:11.951 Test: blockdev write zeroes read split partial ...passed 00:07:11.951 Test: blockdev reset ...[2024-07-26 14:12:31.525740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:11.951 passed 00:07:11.951 Test: blockdev write read 8 blocks ...[2024-07-26 14:12:31.529647] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:11.951 passed 00:07:11.951 Test: blockdev write read size > 128k ...passed 00:07:11.951 Test: blockdev write read invalid size ...passed 00:07:11.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.951 Test: blockdev write read max offset ...passed 00:07:11.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.951 Test: blockdev writev readv 8 blocks ...passed 00:07:11.951 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.951 Test: blockdev writev readv block ...passed 00:07:11.951 Test: blockdev writev readv size > 128k ...passed 00:07:11.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.951 Test: blockdev comparev and writev ...[2024-07-26 14:12:31.538178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x252604000 len:0x1000 00:07:11.951 [2024-07-26 14:12:31.538230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev nvme passthru rw ...passed 00:07:11.951 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.951 Test: blockdev nvme admin passthru ...[2024-07-26 14:12:31.539017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.951 [2024-07-26 14:12:31.539056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev copy ...passed 00:07:11.951 Suite: bdevio tests on: Nvme2n2 00:07:11.951 Test: blockdev write read block ...passed 00:07:11.951 Test: blockdev write zeroes read block ...passed 00:07:11.951 Test: blockdev write zeroes read no split ...passed 00:07:11.951 Test: blockdev write zeroes read split ...passed 00:07:11.951 Test: blockdev write zeroes read split partial ...passed 00:07:11.951 Test: blockdev reset ...[2024-07-26 14:12:31.598212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:11.951 passed 00:07:11.951 Test: blockdev write read 8 blocks ...[2024-07-26 14:12:31.602552] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:11.951 passed 00:07:11.951 Test: blockdev write read size > 128k ...passed 00:07:11.951 Test: blockdev write read invalid size ...passed 00:07:11.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.951 Test: blockdev write read max offset ...passed 00:07:11.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.951 Test: blockdev writev readv 8 blocks ...passed 00:07:11.951 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.951 Test: blockdev writev readv block ...passed 00:07:11.951 Test: blockdev writev readv size > 128k ...passed 00:07:11.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.951 Test: blockdev comparev and writev ...[2024-07-26 14:12:31.610748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29243a000 len:0x1000 00:07:11.951 [2024-07-26 14:12:31.610817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev nvme passthru rw ...passed 00:07:11.951 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:12:31.611734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.951 passed 00:07:11.951 Test: blockdev nvme admin passthru ...[2024-07-26 14:12:31.611781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev copy ...passed 00:07:11.951 Suite: bdevio tests on: Nvme2n1 00:07:11.951 Test: blockdev write read block ...passed 00:07:11.951 Test: blockdev write zeroes read block ...passed 00:07:11.951 Test: blockdev write zeroes read no split ...passed 00:07:11.951 Test: blockdev write zeroes read split ...passed 00:07:11.951 Test: blockdev write zeroes read split partial ...passed 00:07:11.951 Test: blockdev reset ...[2024-07-26 14:12:31.674081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:11.951 passed 00:07:11.951 Test: blockdev write read 8 blocks ...[2024-07-26 14:12:31.678138] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:11.951 passed 00:07:11.951 Test: blockdev write read size > 128k ...passed 00:07:11.951 Test: blockdev write read invalid size ...passed 00:07:11.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:11.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:11.951 Test: blockdev write read max offset ...passed 00:07:11.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:11.951 Test: blockdev writev readv 8 blocks ...passed 00:07:11.951 Test: blockdev writev readv 30 x 1block ...passed 00:07:11.951 Test: blockdev writev readv block ...passed 00:07:11.951 Test: blockdev writev readv size > 128k ...passed 00:07:11.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:11.951 Test: blockdev comparev and writev ...[2024-07-26 14:12:31.686716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292434000 len:0x1000 00:07:11.951 [2024-07-26 14:12:31.686785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev nvme passthru rw ...passed 00:07:11.951 Test: blockdev nvme passthru vendor specific ...passed 00:07:11.951 Test: blockdev nvme admin passthru ...[2024-07-26 14:12:31.687743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:11.951 [2024-07-26 14:12:31.687787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:11.951 passed 00:07:11.951 Test: blockdev copy ...passed 00:07:11.951 Suite: bdevio tests on: Nvme1n1 00:07:11.951 Test: blockdev write read block ...passed 00:07:11.951 Test: blockdev write zeroes read block ...passed 00:07:11.951 Test: blockdev write zeroes read no split ...passed 00:07:12.211 Test: blockdev write zeroes read split ...passed 00:07:12.211 Test: blockdev write zeroes read split partial ...passed 00:07:12.211 Test: blockdev reset ...[2024-07-26 14:12:31.748066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:12.211 passed 00:07:12.211 Test: blockdev write read 8 blocks ...[2024-07-26 14:12:31.751467] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:12.211 passed 00:07:12.211 Test: blockdev write read size > 128k ...passed 00:07:12.211 Test: blockdev write read invalid size ...passed 00:07:12.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.211 Test: blockdev write read max offset ...passed 00:07:12.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.211 Test: blockdev writev readv 8 blocks ...passed 00:07:12.211 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.211 Test: blockdev writev readv block ...passed 00:07:12.211 Test: blockdev writev readv size > 128k ...passed 00:07:12.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.211 Test: blockdev comparev and writev ...passed 00:07:12.211 Test: blockdev nvme passthru rw ...[2024-07-26 14:12:31.759637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292430000 len:0x1000 00:07:12.211 [2024-07-26 14:12:31.759705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.211 passed 00:07:12.211 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.211 Test: blockdev nvme admin passthru ...[2024-07-26 14:12:31.760536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.211 [2024-07-26 14:12:31.760590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.211 passed 00:07:12.211 Test: blockdev copy ...passed 00:07:12.211 Suite: bdevio tests on: Nvme0n1 00:07:12.211 Test: blockdev write read block ...passed 00:07:12.211 Test: blockdev write zeroes read block ...passed 00:07:12.211 Test: blockdev write zeroes read no split ...passed 00:07:12.211 Test: blockdev write zeroes read split ...passed 00:07:12.211 Test: blockdev write zeroes read split partial ...passed 00:07:12.211 Test: blockdev reset ...[2024-07-26 14:12:31.827342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:12.211 passed 00:07:12.211 Test: blockdev write read 8 blocks ...[2024-07-26 14:12:31.831063] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:12.211 passed 00:07:12.211 Test: blockdev write read size > 128k ...passed 00:07:12.211 Test: blockdev write read invalid size ...passed 00:07:12.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.211 Test: blockdev write read max offset ...passed 00:07:12.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.211 Test: blockdev writev readv 8 blocks ...passed 00:07:12.211 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.211 Test: blockdev writev readv block ...passed 00:07:12.211 Test: blockdev writev readv size > 128k ...passed 00:07:12.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.211 Test: blockdev comparev and writev ...passed 00:07:12.211 Test: blockdev nvme passthru rw ...[2024-07-26 14:12:31.838498] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:12.211 separate metadata which is not supported yet. 
00:07:12.211 passed 00:07:12.211 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.211 Test: blockdev nvme admin passthru ...[2024-07-26 14:12:31.839054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:12.211 [2024-07-26 14:12:31.839116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:12.211 passed 00:07:12.211 Test: blockdev copy ...passed 00:07:12.211 00:07:12.211 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.211 suites 6 6 n/a 0 0 00:07:12.211 tests 138 138 138 0 0 00:07:12.211 asserts 893 893 893 0 n/a 00:07:12.211 00:07:12.211 Elapsed time = 1.218 seconds 00:07:12.211 0 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 64876 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 64876 ']' 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 64876 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64876 00:07:12.211 killing process with pid 64876 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64876' 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 64876 00:07:12.211 14:12:31 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 64876 00:07:13.147 14:12:32 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:13.147 00:07:13.147 real 0m2.492s 00:07:13.147 user 0m6.145s 00:07:13.147 sys 0m0.357s 00:07:13.147 14:12:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.147 ************************************ 00:07:13.147 END TEST bdev_bounds 00:07:13.147 ************************************ 00:07:13.147 14:12:32 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.147 14:12:32 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.147 14:12:32 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:13.147 14:12:32 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.147 14:12:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.147 ************************************ 00:07:13.147 START TEST bdev_nbd 00:07:13.147 ************************************ 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=64930 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 64930 /var/tmp/spdk-nbd.sock 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 64930 ']' 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.147 14:12:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.147 [2024-07-26 14:12:32.877089] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:13.147 [2024-07-26 14:12:32.877220] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.405 [2024-07-26 14:12:33.036374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.664 [2024-07-26 14:12:33.187787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:14.232 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.491 14:12:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.491 1+0 records in 
00:07:14.491 1+0 records out 00:07:14.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00229962 s, 1.8 MB/s 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.491 1+0 records in 00:07:14.491 1+0 records out 00:07:14.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615252 s, 6.7 MB/s 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.491 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.492 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.492 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.492 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.492 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.492 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.492 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.060 1+0 records in 00:07:15.060 1+0 records out 00:07:15.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527776 s, 7.8 MB/s 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.060 1+0 records in 00:07:15.060 1+0 records out 00:07:15.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000658168 s, 6.2 MB/s 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.060 14:12:34 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.060 14:12:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.319 1+0 records in 00:07:15.319 1+0 records out 00:07:15.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00151716 s, 2.7 MB/s 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.319 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:15.578 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.837 1+0 records in 00:07:15.837 1+0 records out 00:07:15.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741824 s, 5.5 MB/s 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd0", 00:07:15.837 "bdev_name": "Nvme0n1" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd1", 00:07:15.837 "bdev_name": "Nvme1n1" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd2", 00:07:15.837 "bdev_name": "Nvme2n1" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd3", 00:07:15.837 "bdev_name": "Nvme2n2" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd4", 00:07:15.837 "bdev_name": "Nvme2n3" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd5", 00:07:15.837 "bdev_name": "Nvme3n1" 00:07:15.837 } 00:07:15.837 ]' 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd0", 00:07:15.837 "bdev_name": "Nvme0n1" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd1", 00:07:15.837 "bdev_name": "Nvme1n1" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd2", 00:07:15.837 "bdev_name": "Nvme2n1" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd3", 00:07:15.837 "bdev_name": "Nvme2n2" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd4", 00:07:15.837 "bdev_name": "Nvme2n3" 00:07:15.837 }, 00:07:15.837 { 00:07:15.837 "nbd_device": "/dev/nbd5", 00:07:15.837 "bdev_name": "Nvme3n1" 00:07:15.837 } 00:07:15.837 ]' 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.837 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.096 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.097 14:12:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.355 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.356 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.356 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.615 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.873 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.131 14:12:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.390 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.649 14:12:37 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.649 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:17.908 /dev/nbd0 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:17.908 
14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.908 1+0 records in 00:07:17.908 1+0 records out 00:07:17.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485924 s, 8.4 MB/s 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.908 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:18.167 /dev/nbd1 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.167 1+0 records in 00:07:18.167 1+0 records out 00:07:18.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697975 s, 5.9 MB/s 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@889 -- # return 0 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.167 14:12:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:18.426 /dev/nbd10 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.426 1+0 records in 00:07:18.426 1+0 records out 00:07:18.426 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858454 s, 4.8 MB/s 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.426 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:18.685 /dev/nbd11 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:18.944 14:12:38 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.944 1+0 records in 00:07:18.944 1+0 records out 00:07:18.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072064 s, 5.7 MB/s 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.944 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:19.203 /dev/nbd12 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.203 1+0 records in 00:07:19.203 1+0 records out 00:07:19.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992593 s, 4.1 MB/s 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.203 14:12:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.204 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.204 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.204 14:12:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:19.462 /dev/nbd13 
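The xtrace above shows nbd_rpc_data_verify re-exporting the six bdevs (Nvme0n1, Nvme1n1, Nvme2n1, Nvme2n2, Nvme2n3, Nvme3n1) over /dev/nbd0, /dev/nbd1 and /dev/nbd10-13 with rpc.py nbd_start_disk, gating each one on waitfornbd before any I/O is issued. A minimal sketch of that readiness check, reconstructed from the @871/@884 loops in the trace (in the real helper the dd probe is also retried up to 20 times, and the retry sleep is an assumption since a bare sleep is not echoed by xtrace; the scratch-file path is the one in the log):

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to publish the device in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between retries
        done
        # Prove the device actually serves reads: one 4 KiB O_DIRECT block.
        dd if="/dev/$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }

In the run above every device passes on the first probe: each dd reports "1+0 records in / 1+0 records out" and the helper returns 0, so the loops never have to retry.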
00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.462 1+0 records in 00:07:19.462 1+0 records out 00:07:19.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862445 s, 4.7 MB/s 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.462 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd0", 00:07:19.721 "bdev_name": "Nvme0n1" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd1", 00:07:19.721 "bdev_name": "Nvme1n1" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd10", 00:07:19.721 "bdev_name": "Nvme2n1" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd11", 00:07:19.721 "bdev_name": "Nvme2n2" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd12", 00:07:19.721 "bdev_name": "Nvme2n3" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd13", 00:07:19.721 "bdev_name": "Nvme3n1" 00:07:19.721 } 00:07:19.721 ]' 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd0", 00:07:19.721 "bdev_name": "Nvme0n1" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd1", 00:07:19.721 "bdev_name": "Nvme1n1" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd10", 00:07:19.721 "bdev_name": "Nvme2n1" 
00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd11", 00:07:19.721 "bdev_name": "Nvme2n2" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd12", 00:07:19.721 "bdev_name": "Nvme2n3" 00:07:19.721 }, 00:07:19.721 { 00:07:19.721 "nbd_device": "/dev/nbd13", 00:07:19.721 "bdev_name": "Nvme3n1" 00:07:19.721 } 00:07:19.721 ]' 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.721 /dev/nbd1 00:07:19.721 /dev/nbd10 00:07:19.721 /dev/nbd11 00:07:19.721 /dev/nbd12 00:07:19.721 /dev/nbd13' 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.721 /dev/nbd1 00:07:19.721 /dev/nbd10 00:07:19.721 /dev/nbd11 00:07:19.721 /dev/nbd12 00:07:19.721 /dev/nbd13' 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.721 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.722 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:19.722 256+0 records in 00:07:19.722 256+0 records out 00:07:19.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00785011 s, 134 MB/s 00:07:19.722 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.722 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.996 256+0 records in 00:07:19.996 256+0 records out 00:07:19.996 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175424 s, 6.0 MB/s 00:07:19.996 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.996 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:20.267 256+0 records in 00:07:20.267 256+0 records out 00:07:20.267 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176077 s, 6.0 MB/s 00:07:20.267 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.267 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:20.267 256+0 records in 00:07:20.267 256+0 records out 00:07:20.267 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162853 s, 6.4 MB/s 00:07:20.267 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.267 14:12:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:20.526 256+0 records in 00:07:20.526 256+0 records out 00:07:20.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175873 s, 6.0 MB/s 00:07:20.526 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.526 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:20.786 256+0 records in 00:07:20.786 256+0 records out 00:07:20.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161162 s, 6.5 MB/s 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:20.786 256+0 records in 00:07:20.786 256+0 records out 00:07:20.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172257 s, 6.1 MB/s 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:20.786 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:21.044 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.044 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:21.044 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:21.044 14:12:40 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:21.044 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:21.044 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:21.045 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.045 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:21.045 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.045 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.045 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.045 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.303 14:12:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.562 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.820 14:12:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.820 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.079 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:22.337 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.338 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.338 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.338 14:12:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:22.596 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:22.596 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.597 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:07:22.855 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:23.114 malloc_lvol_verify 00:07:23.114 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:23.373 7d372743-68cf-41ea-8451-142d8ef92e2a 00:07:23.373 14:12:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:23.373 c9ac9159-5903-4e8f-b1a4-14c12daa8bf8 00:07:23.631 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:23.631 /dev/nbd0 00:07:23.631 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:07:23.890 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.890 Discarding device blocks: 0/4096 done 00:07:23.890 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:23.890 00:07:23.890 Allocating group tables: 0/1 done 00:07:23.890 Writing inode tables: 0/1 done 00:07:23.890 Creating journal (1024 blocks): done 00:07:23.890 Writing superblocks and filesystem accounting information: 0/1 done 00:07:23.890 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:23.890 14:12:43 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 64930 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 64930 ']' 00:07:23.890 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 64930 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64930 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:24.149 killing process with pid 64930 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64930' 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 64930 00:07:24.149 14:12:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 64930 00:07:25.086 14:12:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:25.086 00:07:25.086 real 0m11.884s 00:07:25.086 user 0m16.802s 00:07:25.086 sys 0m3.766s 00:07:25.086 14:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:25.086 14:12:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:25.086 ************************************ 00:07:25.086 END TEST bdev_nbd 00:07:25.086 ************************************ 00:07:25.086 14:12:44 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:25.086 14:12:44 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:25.086 skipping fio tests on NVMe due to multi-ns failures. 
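Before killing the NBD app (pid 64930), the trace runs nbd_with_lvol_verify: it builds a malloc-backed lvolstore, exports a small logical volume over /dev/nbd0 and checks that a real filesystem can be created on it. The RPC sequence, condensed from the log as an illustrative recap rather than the verbatim helper (socket path and sizes are exactly those in the trace; the size comments are inferred from the rpc.py argument order and the mke2fs output above):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512-byte blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID seen in the log
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol (mke2fs above reports 4096 1k blocks)
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # produces the mke2fs 1.46.5 output above
    $rpc nbd_stop_disk /dev/nbd0                           # teardown, then killprocess 64930

The earlier part of the same test (the dd/cmp block above) had already written 1 MiB of /dev/urandom data through each exported device with oflag=direct and compared it back with cmp -b -n 1M, so bdev_nbd covers both raw data integrity and filesystem creation before it exits.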
00:07:25.086 14:12:44 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:25.086 14:12:44 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:25.086 14:12:44 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.086 14:12:44 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:25.086 14:12:44 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:25.086 14:12:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:25.086 ************************************ 00:07:25.086 START TEST bdev_verify 00:07:25.086 ************************************ 00:07:25.086 14:12:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:25.086 [2024-07-26 14:12:44.827776] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:25.086 [2024-07-26 14:12:44.827993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65332 ] 00:07:25.345 [2024-07-26 14:12:44.998849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.605 [2024-07-26 14:12:45.151093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.605 [2024-07-26 14:12:45.151157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.173 Running I/O for 5 seconds... 
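With fio skipped, verify coverage comes from the bdevperf example app started above. It drives all six Nvme bdevs for five seconds with a 128-deep queue of 4 KiB verify I/O on a two-core mask, which is why the result table that follows reports two jobs per bdev (Core Mask 0x1 and 0x2). The invocation restated with the flags unpacked (flag meanings are the standard bdevperf options; -C and the empty trailing argument are simply forwarded by run_test):

    # -q 128: outstanding I/Os per job; -o 4096: I/O size in bytes;
    # -w verify: write, read back and compare; -t 5: run time in seconds;
    # -m 0x3: core mask, matching the two reactors started above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3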
00:07:31.481 00:07:31.481 Latency(us) 00:07:31.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:31.481 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x0 length 0xbd0bd 00:07:31.481 Nvme0n1 : 5.05 1470.15 5.74 0.00 0.00 86819.58 17515.99 80549.70 00:07:31.481 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:31.481 Nvme0n1 : 5.06 1543.63 6.03 0.00 0.00 82640.51 16443.58 73400.32 00:07:31.481 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x0 length 0xa0000 00:07:31.481 Nvme1n1 : 5.05 1469.77 5.74 0.00 0.00 86678.37 20256.58 77689.95 00:07:31.481 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0xa0000 length 0xa0000 00:07:31.481 Nvme1n1 : 5.06 1542.99 6.03 0.00 0.00 82514.23 19184.17 70540.57 00:07:31.481 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x0 length 0x80000 00:07:31.481 Nvme2n1 : 5.05 1469.36 5.74 0.00 0.00 86570.17 19065.02 74830.20 00:07:31.481 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x80000 length 0x80000 00:07:31.481 Nvme2n1 : 5.06 1542.39 6.02 0.00 0.00 82354.57 17635.14 67680.81 00:07:31.481 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x0 length 0x80000 00:07:31.481 Nvme2n2 : 5.05 1468.90 5.74 0.00 0.00 86432.42 18350.08 72447.07 00:07:31.481 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x80000 length 0x80000 00:07:31.481 Nvme2n2 : 5.06 1541.77 6.02 0.00 0.00 82212.28 16443.58 67680.81 00:07:31.481 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x0 length 0x80000 00:07:31.481 Nvme2n3 : 5.06 1478.50 5.78 0.00 0.00 85731.89 3559.80 77213.32 00:07:31.481 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x80000 length 0x80000 00:07:31.481 Nvme2n3 : 5.08 1550.64 6.06 0.00 0.00 81674.62 4408.79 70540.57 00:07:31.481 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x0 length 0x20000 00:07:31.481 Nvme3n1 : 5.08 1487.55 5.81 0.00 0.00 85129.72 7060.01 80073.08 00:07:31.481 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:31.481 Verification LBA range: start 0x20000 length 0x20000 00:07:31.481 Nvme3n1 : 5.08 1549.43 6.05 0.00 0.00 81589.68 7208.96 73876.95 00:07:31.481 =================================================================================================================== 00:07:31.481 Total : 18115.10 70.76 0.00 0.00 84143.44 3559.80 80549.70 00:07:32.418 00:07:32.418 real 0m7.393s 00:07:32.418 user 0m13.555s 00:07:32.418 sys 0m0.257s 00:07:32.418 14:12:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.418 14:12:52 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:32.418 ************************************ 00:07:32.418 END TEST bdev_verify 00:07:32.418 ************************************ 00:07:32.418 14:12:52 blockdev_nvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.418 14:12:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:32.418 14:12:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.418 14:12:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.418 ************************************ 00:07:32.418 START TEST bdev_verify_big_io 00:07:32.418 ************************************ 00:07:32.419 14:12:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:32.677 [2024-07-26 14:12:52.240029] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:32.677 [2024-07-26 14:12:52.240146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:07:32.677 [2024-07-26 14:12:52.391582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:32.936 [2024-07-26 14:12:52.538784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.936 [2024-07-26 14:12:52.538810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.873 Running I/O for 5 seconds... 00:07:40.444 00:07:40.444 Latency(us) 00:07:40.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.444 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x0 length 0xbd0b 00:07:40.444 Nvme0n1 : 5.65 131.14 8.20 0.00 0.00 942485.75 20494.89 930372.89 00:07:40.444 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:40.444 Nvme0n1 : 5.60 137.06 8.57 0.00 0.00 912821.99 33602.09 930372.89 00:07:40.444 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x0 length 0xa000 00:07:40.444 Nvme1n1 : 5.65 131.64 8.23 0.00 0.00 915290.96 78643.20 888429.85 00:07:40.444 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0xa000 length 0xa000 00:07:40.444 Nvme1n1 : 5.61 136.99 8.56 0.00 0.00 889883.93 83886.08 823608.79 00:07:40.444 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x0 length 0x8000 00:07:40.444 Nvme2n1 : 5.66 135.77 8.49 0.00 0.00 870157.34 65297.69 911307.87 00:07:40.444 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x8000 length 0x8000 00:07:40.444 Nvme2n1 : 5.61 136.94 8.56 0.00 0.00 864570.03 83886.08 850299.81 00:07:40.444 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x0 length 0x8000 00:07:40.444 Nvme2n2 : 5.75 138.12 8.63 0.00 0.00 826445.13 65297.69 934185.89 00:07:40.444 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x8000 length 0x8000 00:07:40.444 Nvme2n2 : 5.68 139.52 8.72 0.00 0.00 
822071.19 64344.44 876990.84 00:07:40.444 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x0 length 0x8000 00:07:40.444 Nvme2n3 : 5.81 149.12 9.32 0.00 0.00 750305.08 17635.14 964689.92 00:07:40.444 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x8000 length 0x8000 00:07:40.444 Nvme2n3 : 5.79 150.79 9.42 0.00 0.00 742272.08 37653.41 903681.86 00:07:40.444 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x0 length 0x2000 00:07:40.444 Nvme3n1 : 5.82 158.13 9.88 0.00 0.00 688510.34 889.95 991380.95 00:07:40.444 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:40.444 Verification LBA range: start 0x2000 length 0x2000 00:07:40.444 Nvme3n1 : 5.80 165.66 10.35 0.00 0.00 661929.76 1251.14 922746.88 00:07:40.444 =================================================================================================================== 00:07:40.444 Total : 1710.88 106.93 0.00 0.00 816427.85 889.95 991380.95 00:07:41.012 00:07:41.012 real 0m8.547s 00:07:41.012 user 0m15.910s 00:07:41.012 sys 0m0.249s 00:07:41.012 14:13:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.012 ************************************ 00:07:41.012 END TEST bdev_verify_big_io 00:07:41.012 14:13:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:41.012 ************************************ 00:07:41.013 14:13:00 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.013 14:13:00 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:41.013 14:13:00 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.013 14:13:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:41.013 ************************************ 00:07:41.013 START TEST bdev_write_zeroes 00:07:41.013 ************************************ 00:07:41.013 14:13:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:41.271 [2024-07-26 14:13:00.851504] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:41.271 [2024-07-26 14:13:00.851651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65534 ] 00:07:41.271 [2024-07-26 14:13:01.009371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.530 [2024-07-26 14:13:01.158819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.096 Running I/O for 1 seconds... 
00:07:43.029 00:07:43.029 Latency(us) 00:07:43.029 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.029 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.029 Nvme0n1 : 1.02 8692.42 33.95 0.00 0.00 14674.43 7060.01 24188.74 00:07:43.029 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.029 Nvme1n1 : 1.02 8678.38 33.90 0.00 0.00 14672.70 11677.32 24188.74 00:07:43.029 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.029 Nvme2n1 : 1.02 8665.19 33.85 0.00 0.00 14644.89 11379.43 21924.77 00:07:43.029 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.029 Nvme2n2 : 1.02 8702.22 33.99 0.00 0.00 14545.49 8936.73 19422.49 00:07:43.029 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.029 Nvme2n3 : 1.02 8689.26 33.94 0.00 0.00 14536.75 8996.31 18945.86 00:07:43.029 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:43.029 Nvme3n1 : 1.03 8676.28 33.89 0.00 0.00 14509.23 6970.65 18707.55 00:07:43.029 =================================================================================================================== 00:07:43.029 Total : 52103.75 203.53 0.00 0.00 14597.01 6970.65 24188.74 00:07:43.969 00:07:43.969 real 0m2.897s 00:07:43.969 user 0m2.584s 00:07:43.969 sys 0m0.196s 00:07:43.969 14:13:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.969 ************************************ 00:07:43.969 END TEST bdev_write_zeroes 00:07:43.969 ************************************ 00:07:43.969 14:13:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:43.969 14:13:03 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.969 14:13:03 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:43.969 14:13:03 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.969 14:13:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.970 ************************************ 00:07:43.970 START TEST bdev_json_nonenclosed 00:07:43.970 ************************************ 00:07:43.970 14:13:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.228 [2024-07-26 14:13:03.797341] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:44.228 [2024-07-26 14:13:03.797492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65587 ] 00:07:44.228 [2024-07-26 14:13:03.953710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.486 [2024-07-26 14:13:04.116164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.486 [2024-07-26 14:13:04.116268] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:07:44.486 [2024-07-26 14:13:04.116295] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:44.486 [2024-07-26 14:13:04.116311] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.745 00:07:44.745 real 0m0.728s 00:07:44.745 user 0m0.511s 00:07:44.745 sys 0m0.111s 00:07:44.745 14:13:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.745 14:13:04 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:44.745 ************************************ 00:07:44.745 END TEST bdev_json_nonenclosed 00:07:44.745 ************************************ 00:07:44.745 14:13:04 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.745 14:13:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:44.745 14:13:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.745 14:13:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:44.745 ************************************ 00:07:44.745 START TEST bdev_json_nonarray 00:07:44.745 ************************************ 00:07:44.745 14:13:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:45.004 [2024-07-26 14:13:04.604005] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:45.004 [2024-07-26 14:13:04.604183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65618 ] 00:07:45.263 [2024-07-26 14:13:04.774824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.263 [2024-07-26 14:13:04.930023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.263 [2024-07-26 14:13:04.930138] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:45.263 [2024-07-26 14:13:04.930168] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:45.263 [2024-07-26 14:13:04.930184] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:45.831 00:07:45.831 real 0m0.787s 00:07:45.831 user 0m0.551s 00:07:45.831 sys 0m0.132s 00:07:45.831 14:13:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.831 14:13:05 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:45.831 ************************************ 00:07:45.831 END TEST bdev_json_nonarray 00:07:45.831 ************************************ 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:45.831 14:13:05 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:45.831 00:07:45.831 real 0m40.529s 00:07:45.831 user 1m1.506s 00:07:45.831 sys 0m6.055s 00:07:45.831 14:13:05 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.831 14:13:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:45.831 ************************************ 00:07:45.831 END TEST blockdev_nvme 00:07:45.831 ************************************ 00:07:45.831 14:13:05 -- spdk/autotest.sh@217 -- # uname -s 00:07:45.831 14:13:05 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:07:45.831 14:13:05 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.831 14:13:05 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.831 14:13:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.831 14:13:05 -- common/autotest_common.sh@10 -- # set +x 00:07:45.831 ************************************ 00:07:45.831 START TEST blockdev_nvme_gpt 00:07:45.831 ************************************ 00:07:45.831 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:45.831 * Looking for test storage... 
00:07:45.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65694 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:45.831 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 65694 00:07:45.831 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 65694 ']' 00:07:45.831 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.832 14:13:05 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:45.832 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:45.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.832 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.832 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:45.832 14:13:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.832 [2024-07-26 14:13:05.587779] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:45.832 [2024-07-26 14:13:05.587987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65694 ] 00:07:46.091 [2024-07-26 14:13:05.742164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.349 [2024-07-26 14:13:05.904500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.917 14:13:06 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:46.917 14:13:06 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:46.917 14:13:06 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:46.917 14:13:06 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:46.917 14:13:06 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:47.176 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.435 Waiting for block devices as requested 00:07:47.435 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.435 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.693 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.693 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:52.987 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:52.987 BYT; 00:07:52.987 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:52.987 BYT; 00:07:52.987 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:52.987 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:52.987 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:52.988 14:13:12 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:52.988 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:52.988 14:13:12 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:53.923 The operation has completed successfully. 
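The GPT setup above relabels /dev/nvme0n1 with parted and then retags partition 1 with the SPDK GPT type GUID read out of module/bdev/gpt/gpt.h; the matching retag of partition 2 follows immediately below. As a minimal sketch, assuming a scratch namespace at /dev/nvme0n1 and using only the GUIDs shown in this log, the same layout could be reproduced by hand with:

    # two equal GPT partitions named as in setup_gpt_conf
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # SPDK_GPT_GUID + unique partition GUID on partition 1
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    # SPDK_GPT_OLD_GUID + unique partition GUID on partition 2
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1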
00:07:53.923 14:13:13 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:54.857 The operation has completed successfully. 00:07:54.857 14:13:14 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:55.424 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.992 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.992 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.992 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.992 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.251 14:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:56.251 14:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.251 14:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.251 [] 00:07:56.251 14:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.251 14:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:56.251 14:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:56.251 14:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:56.251 14:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:56.251 14:13:15 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:56.251 14:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.251 14:13:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.511 
14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.511 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.511 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:56.770 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:56.770 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:56.771 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a87e4487-5628-47de-a2bf-1d81b2327889"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a87e4487-5628-47de-a2bf-1d81b2327889",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6223947d-6bc6-4617-b12c-63b5b9ff291f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6223947d-6bc6-4617-b12c-63b5b9ff291f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a99ad43c-5a01-4410-830a-27a95efcfb58"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a99ad43c-5a01-4410-830a-27a95efcfb58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "315c942d-b7b4-492d-b120-684310453fab"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "315c942d-b7b4-492d-b120-684310453fab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6a0c6daa-8d5e-4c5c-a102-9d7d6e2f61cf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6a0c6daa-8d5e-4c5c-a102-9d7d6e2f61cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:56.771 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:56.771 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:56.771 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:56.771 14:13:16 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 65694 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 65694 ']' 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 65694 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65694 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.771 killing process with pid 65694 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65694' 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 65694 00:07:56.771 14:13:16 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 65694 00:07:58.676 14:13:18 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.676 14:13:18 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.676 14:13:18 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:58.676 14:13:18 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.676 14:13:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.676 ************************************ 00:07:58.676 START TEST bdev_hello_world 00:07:58.676 ************************************ 00:07:58.676 14:13:18 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.676 [2024-07-26 14:13:18.246810] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:58.676 [2024-07-26 14:13:18.246990] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66320 ] 00:07:58.676 [2024-07-26 14:13:18.417679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.935 [2024-07-26 14:13:18.579148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.504 [2024-07-26 14:13:19.134112] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.504 [2024-07-26 14:13:19.134169] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:59.504 [2024-07-26 14:13:19.134219] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.504 [2024-07-26 14:13:19.136777] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.504 [2024-07-26 14:13:19.137413] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.504 [2024-07-26 14:13:19.137478] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.504 [2024-07-26 14:13:19.137825] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:59.504 00:07:59.504 [2024-07-26 14:13:19.137881] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:00.438 00:08:00.438 real 0m1.931s 00:08:00.438 user 0m1.623s 00:08:00.438 sys 0m0.197s 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.438 ************************************ 00:08:00.438 END TEST bdev_hello_world 00:08:00.438 ************************************ 00:08:00.438 14:13:20 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:00.438 14:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.438 14:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.438 14:13:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.438 ************************************ 00:08:00.438 START TEST bdev_bounds 00:08:00.438 ************************************ 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=66363 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.438 Process bdevio pid: 66363 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 66363' 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 66363 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 66363 ']' 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.438 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.438 14:13:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:00.695 [2024-07-26 14:13:20.235871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:00.695 [2024-07-26 14:13:20.236058] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66363 ] 00:08:00.695 [2024-07-26 14:13:20.408331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.954 [2024-07-26 14:13:20.562023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.954 [2024-07-26 14:13:20.562319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.954 [2024-07-26 14:13:20.562330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.521 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.521 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:08:01.521 14:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:01.521 I/O targets: 00:08:01.521 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:01.521 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:01.521 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:01.521 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.521 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.521 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.521 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:01.521 00:08:01.521 00:08:01.521 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.521 http://cunit.sourceforge.net/ 00:08:01.521 00:08:01.521 00:08:01.521 Suite: bdevio tests on: Nvme3n1 00:08:01.521 Test: blockdev write read block ...passed 00:08:01.521 Test: blockdev write zeroes read block ...passed 00:08:01.521 Test: blockdev write zeroes read no split ...passed 00:08:01.780 Test: blockdev write zeroes read split ...passed 00:08:01.780 Test: blockdev write zeroes read split partial ...passed 00:08:01.780 Test: blockdev reset ...[2024-07-26 14:13:21.316121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:01.780 [2024-07-26 14:13:21.319718] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:01.780 passed 00:08:01.780 Test: blockdev write read 8 blocks ...passed 00:08:01.780 Test: blockdev write read size > 128k ...passed 00:08:01.780 Test: blockdev write read invalid size ...passed 00:08:01.780 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.781 Test: blockdev write read max offset ...passed 00:08:01.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.781 Test: blockdev writev readv 8 blocks ...passed 00:08:01.781 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.781 Test: blockdev writev readv block ...passed 00:08:01.781 Test: blockdev writev readv size > 128k ...passed 00:08:01.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.781 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.327652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26f006000 len:0x1000 00:08:01.781 [2024-07-26 14:13:21.327705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.781 passed 00:08:01.781 Test: blockdev nvme passthru rw ...passed 00:08:01.781 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:13:21.328553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.781 [2024-07-26 14:13:21.328592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.781 passed 00:08:01.781 Test: blockdev nvme admin passthru ...passed 00:08:01.781 Test: blockdev copy ...passed 00:08:01.781 Suite: bdevio tests on: Nvme2n3 00:08:01.781 Test: blockdev write read block ...passed 00:08:01.781 Test: blockdev write zeroes read block ...passed 00:08:01.781 Test: blockdev write zeroes read no split ...passed 00:08:01.781 Test: blockdev write zeroes read split ...passed 00:08:01.781 Test: blockdev write zeroes read split partial ...passed 00:08:01.781 Test: blockdev reset ...[2024-07-26 14:13:21.388068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:01.781 [2024-07-26 14:13:21.392263] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:01.781 passed 00:08:01.781 Test: blockdev write read 8 blocks ...passed 00:08:01.781 Test: blockdev write read size > 128k ...passed 00:08:01.781 Test: blockdev write read invalid size ...passed 00:08:01.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.781 Test: blockdev write read max offset ...passed 00:08:01.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.781 Test: blockdev writev readv 8 blocks ...passed 00:08:01.781 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.781 Test: blockdev writev readv block ...passed 00:08:01.781 Test: blockdev writev readv size > 128k ...passed 00:08:01.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.781 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.399951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28f83c000 len:0x1000 00:08:01.781 [2024-07-26 14:13:21.399998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.781 passed 00:08:01.781 Test: blockdev nvme passthru rw ...passed 00:08:01.781 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.781 Test: blockdev nvme admin passthru ...[2024-07-26 14:13:21.400768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.781 [2024-07-26 14:13:21.400804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.781 passed 00:08:01.781 Test: blockdev copy ...passed 00:08:01.781 Suite: bdevio tests on: Nvme2n2 00:08:01.781 Test: blockdev write read block ...passed 00:08:01.781 Test: blockdev write zeroes read block ...passed 00:08:01.781 Test: blockdev write zeroes read no split ...passed 00:08:01.781 Test: blockdev write zeroes read split ...passed 00:08:01.781 Test: blockdev write zeroes read split partial ...passed 00:08:01.781 Test: blockdev reset ...[2024-07-26 14:13:21.461855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:01.781 passed 00:08:01.781 Test: blockdev write read 8 blocks ...[2024-07-26 14:13:21.465807] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:01.781 passed 00:08:01.781 Test: blockdev write read size > 128k ...passed 00:08:01.781 Test: blockdev write read invalid size ...passed 00:08:01.781 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.781 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.781 Test: blockdev write read max offset ...passed 00:08:01.781 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.781 Test: blockdev writev readv 8 blocks ...passed 00:08:01.781 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.781 Test: blockdev writev readv block ...passed 00:08:01.781 Test: blockdev writev readv size > 128k ...passed 00:08:01.781 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.781 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.473734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28f836000 len:0x1000 00:08:01.781 [2024-07-26 14:13:21.473782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.781 passed 00:08:01.781 Test: blockdev nvme passthru rw ...passed 00:08:01.781 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.781 Test: blockdev nvme admin passthru ...[2024-07-26 14:13:21.474756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.781 [2024-07-26 14:13:21.474812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.781 passed 00:08:01.781 Test: blockdev copy ...passed 00:08:01.781 Suite: bdevio tests on: Nvme2n1 00:08:01.781 Test: blockdev write read block ...passed 00:08:01.781 Test: blockdev write zeroes read block ...passed 00:08:01.781 Test: blockdev write zeroes read no split ...passed 00:08:01.781 Test: blockdev write zeroes read split ...passed 00:08:02.040 Test: blockdev write zeroes read split partial ...passed 00:08:02.040 Test: blockdev reset ...[2024-07-26 14:13:21.553857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:02.040 passed 00:08:02.040 Test: blockdev write read 8 blocks ...[2024-07-26 14:13:21.557843] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:02.040 passed 00:08:02.040 Test: blockdev write read size > 128k ...passed 00:08:02.040 Test: blockdev write read invalid size ...passed 00:08:02.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.040 Test: blockdev write read max offset ...passed 00:08:02.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.040 Test: blockdev writev readv 8 blocks ...passed 00:08:02.040 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.040 Test: blockdev writev readv block ...passed 00:08:02.040 Test: blockdev writev readv size > 128k ...passed 00:08:02.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.040 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.565896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28f832000 len:0x1000 00:08:02.040 [2024-07-26 14:13:21.565962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.040 passed 00:08:02.040 Test: blockdev nvme passthru rw ...passed 00:08:02.040 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:13:21.566855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:02.040 [2024-07-26 14:13:21.566892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:02.040 passed 00:08:02.040 Test: blockdev nvme admin passthru ...passed 00:08:02.040 Test: blockdev copy ...passed 00:08:02.040 Suite: bdevio tests on: Nvme1n1p2 00:08:02.040 Test: blockdev write read block ...passed 00:08:02.040 Test: blockdev write zeroes read block ...passed 00:08:02.040 Test: blockdev write zeroes read no split ...passed 00:08:02.040 Test: blockdev write zeroes read split ...passed 00:08:02.040 Test: blockdev write zeroes read split partial ...passed 00:08:02.040 Test: blockdev reset ...[2024-07-26 14:13:21.654557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:02.040 passed 00:08:02.040 Test: blockdev write read 8 blocks ...[2024-07-26 14:13:21.658282] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:02.040 passed 00:08:02.040 Test: blockdev write read size > 128k ...passed 00:08:02.040 Test: blockdev write read invalid size ...passed 00:08:02.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.040 Test: blockdev write read max offset ...passed 00:08:02.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.040 Test: blockdev writev readv 8 blocks ...passed 00:08:02.040 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.040 Test: blockdev writev readv block ...passed 00:08:02.040 Test: blockdev writev readv size > 128k ...passed 00:08:02.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.040 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.667400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x28f82e000 len:0x1000 00:08:02.040 [2024-07-26 14:13:21.667464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.040 passed 00:08:02.040 Test: blockdev nvme passthru rw ...passed 00:08:02.040 Test: blockdev nvme passthru vendor specific ...passed 00:08:02.040 Test: blockdev nvme admin passthru ...passed 00:08:02.040 Test: blockdev copy ...passed 00:08:02.040 Suite: bdevio tests on: Nvme1n1p1 00:08:02.040 Test: blockdev write read block ...passed 00:08:02.040 Test: blockdev write zeroes read block ...passed 00:08:02.040 Test: blockdev write zeroes read no split ...passed 00:08:02.040 Test: blockdev write zeroes read split ...passed 00:08:02.040 Test: blockdev write zeroes read split partial ...passed 00:08:02.040 Test: blockdev reset ...[2024-07-26 14:13:21.733237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:02.040 passed 00:08:02.040 Test: blockdev write read 8 blocks ...[2024-07-26 14:13:21.736764] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:02.040 passed 00:08:02.040 Test: blockdev write read size > 128k ...passed 00:08:02.040 Test: blockdev write read invalid size ...passed 00:08:02.040 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.040 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.040 Test: blockdev write read max offset ...passed 00:08:02.040 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.040 Test: blockdev writev readv 8 blocks ...passed 00:08:02.040 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.040 Test: blockdev writev readv block ...passed 00:08:02.040 Test: blockdev writev readv size > 128k ...passed 00:08:02.040 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.040 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.745983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x28000e000 len:0x1000 00:08:02.040 [2024-07-26 14:13:21.746032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.040 passed 00:08:02.040 Test: blockdev nvme passthru rw ...passed 00:08:02.040 Test: blockdev nvme passthru vendor specific ...passed 00:08:02.040 Test: blockdev nvme admin passthru ...passed 00:08:02.040 Test: blockdev copy ...passed 00:08:02.040 Suite: bdevio tests on: Nvme0n1 00:08:02.040 Test: blockdev write read block ...passed 00:08:02.040 Test: blockdev write zeroes read block ...passed 00:08:02.040 Test: blockdev write zeroes read no split ...passed 00:08:02.040 Test: blockdev write zeroes read split ...passed 00:08:02.299 Test: blockdev write zeroes read split partial ...passed 00:08:02.299 Test: blockdev reset ...[2024-07-26 14:13:21.817929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:02.299 [2024-07-26 14:13:21.821571] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:02.299 passed 00:08:02.299 Test: blockdev write read 8 blocks ...passed 00:08:02.299 Test: blockdev write read size > 128k ...passed 00:08:02.299 Test: blockdev write read invalid size ...passed 00:08:02.299 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.299 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.299 Test: blockdev write read max offset ...passed 00:08:02.299 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.299 Test: blockdev writev readv 8 blocks ...passed 00:08:02.299 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.299 Test: blockdev writev readv block ...passed 00:08:02.299 Test: blockdev writev readv size > 128k ...passed 00:08:02.299 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.299 Test: blockdev comparev and writev ...[2024-07-26 14:13:21.828970] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:02.299 separate metadata which is not supported yet. 
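Note: the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions printed during the comparev and passthru steps above are the outcomes those tests deliberately provoke, which is why each step still reports passed; for Nvme0n1 the comparev step is skipped because that namespace carries separate metadata, which bdevio does not support yet. As a rough illustration only, the hypothetical helper below scans a saved copy of this log for any completion status other than the two expected ones; the function name, log path, and expected-status list are assumptions and not part of the test suite.

# Hypothetical post-run check (not part of the autotest scripts): flag any
# nvme completion notice that is not one of the statuses the bdevio tests
# trigger on purpose.
check_unexpected_completions() {
    local log=$1
    if grep 'spdk_nvme_print_completion' "$log" \
        | grep -v -e 'COMPARE FAILURE (02/85)' -e 'INVALID OPCODE (00/01)'; then
        echo "unexpected nvme completion status found" >&2
        return 1
    fi
    return 0
}

check_unexpected_completions bdevio.log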
00:08:02.299 passed 00:08:02.299 Test: blockdev nvme passthru rw ...passed 00:08:02.299 Test: blockdev nvme passthru vendor specific ...[2024-07-26 14:13:21.829577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:02.299 [2024-07-26 14:13:21.829621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:02.299 passed 00:08:02.299 Test: blockdev nvme admin passthru ...passed 00:08:02.299 Test: blockdev copy ...passed 00:08:02.299 00:08:02.299 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.299 suites 7 7 n/a 0 0 00:08:02.299 tests 161 161 161 0 0 00:08:02.299 asserts 1025 1025 1025 0 n/a 00:08:02.299 00:08:02.299 Elapsed time = 1.587 seconds 00:08:02.299 0 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 66363 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 66363 ']' 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 66363 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66363 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.299 killing process with pid 66363 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66363' 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 66363 00:08:02.299 14:13:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 66363 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:03.235 00:08:03.235 real 0m2.600s 00:08:03.235 user 0m6.431s 00:08:03.235 sys 0m0.355s 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:03.235 ************************************ 00:08:03.235 END TEST bdev_bounds 00:08:03.235 ************************************ 00:08:03.235 14:13:22 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:03.235 14:13:22 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:03.235 14:13:22 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.235 14:13:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.235 ************************************ 00:08:03.235 START TEST bdev_nbd 00:08:03.235 ************************************ 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:03.235 14:13:22 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=66423 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 66423 /var/tmp/spdk-nbd.sock 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 66423 ']' 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.235 14:13:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:03.235 [2024-07-26 14:13:22.873530] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
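Note: the trace above is the bdev_nbd test starting the standalone bdev_svc application against the bdev JSON config, with /var/tmp/spdk-nbd.sock as its RPC socket, before any NBD devices are attached. A minimal sketch of driving the same bring-up by hand follows; the binary path, config path, and RPC method names are taken from the trace, while the socket polling loop is a simplified stand-in for the repo's waitforlisten helper.

# Minimal sketch: start bdev_svc and export one bdev over NBD by hand.
set -euo pipefail

rpc_sock=/var/tmp/spdk-nbd.sock
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
spdk=/home/vagrant/spdk_repo/spdk

# Launch the bdev service with the JSON config and a private RPC socket.
"$spdk/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 --json "$conf" &

# Crude wait for the RPC socket (waitforlisten does this more carefully).
for _ in $(seq 1 100); do
    if [ -S "$rpc_sock" ]; then break; fi
    sleep 0.1
done

# Attach one bdev to /dev/nbd0 and list the active NBD exports.
"$spdk/scripts/rpc.py" -s "$rpc_sock" nbd_start_disk Nvme0n1 /dev/nbd0
"$spdk/scripts/rpc.py" -s "$rpc_sock" nbd_get_disks

Detaching is symmetric: nbd_stop_disk on the same socket for each /dev/nbdN, as the trace does further down for every device.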
00:08:03.235 [2024-07-26 14:13:22.873678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.493 [2024-07-26 14:13:23.028740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.494 [2024-07-26 14:13:23.177055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.060 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.060 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:08:04.060 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.061 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:04.319 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:04.319 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:04.320 14:13:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:04.320 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:04.320 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.320 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.320 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.320 14:13:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.320 1+0 records in 00:08:04.320 1+0 records out 00:08:04.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529248 s, 7.7 MB/s 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.320 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.578 1+0 records in 00:08:04.578 1+0 records out 00:08:04.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063938 s, 6.4 MB/s 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.578 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.579 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.579 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.579 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.579 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:04.837 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.837 1+0 records in 00:08:04.837 1+0 records out 00:08:04.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862707 s, 4.7 MB/s 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.838 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.097 1+0 records in 00:08:05.097 1+0 records out 00:08:05.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769693 s, 5.3 MB/s 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.097 14:13:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.356 1+0 records in 00:08:05.356 1+0 records out 00:08:05.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816338 s, 5.0 MB/s 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.356 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.939 1+0 records in 00:08:05.939 1+0 records out 00:08:05.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771652 s, 5.3 MB/s 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:05.939 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:08:06.211 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:06.211 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:06.211 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:06.211 14:13:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.211 1+0 records in 00:08:06.211 1+0 records out 00:08:06.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992348 s, 4.1 MB/s 00:08:06.211 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:06.212 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.470 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd0", 00:08:06.471 "bdev_name": "Nvme0n1" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd1", 00:08:06.471 "bdev_name": "Nvme1n1p1" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd2", 00:08:06.471 "bdev_name": "Nvme1n1p2" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd3", 00:08:06.471 "bdev_name": "Nvme2n1" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd4", 00:08:06.471 "bdev_name": "Nvme2n2" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd5", 00:08:06.471 "bdev_name": "Nvme2n3" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd6", 00:08:06.471 "bdev_name": "Nvme3n1" 00:08:06.471 } 00:08:06.471 ]' 00:08:06.471 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:06.471 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd0", 00:08:06.471 "bdev_name": "Nvme0n1" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd1", 00:08:06.471 "bdev_name": "Nvme1n1p1" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd2", 00:08:06.471 "bdev_name": "Nvme1n1p2" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd3", 00:08:06.471 "bdev_name": "Nvme2n1" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd4", 00:08:06.471 "bdev_name": "Nvme2n2" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd5", 00:08:06.471 "bdev_name": "Nvme2n3" 00:08:06.471 }, 00:08:06.471 { 00:08:06.471 "nbd_device": "/dev/nbd6", 00:08:06.471 "bdev_name": "Nvme3n1" 00:08:06.471 } 00:08:06.471 ]' 00:08:06.471 14:13:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.471 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.729 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.988 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.247 14:13:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.247 14:13:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.506 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.766 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.025 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.284 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.284 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.284 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.284 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.285 
14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.285 14:13:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:08.544 /dev/nbd0 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.544 1+0 records in 00:08:08.544 1+0 records out 00:08:08.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539233 s, 7.6 MB/s 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.544 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:08.803 /dev/nbd1 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:08.803 14:13:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:08.803 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.062 1+0 records in 00:08:09.062 1+0 records out 00:08:09.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755512 s, 5.4 MB/s 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:09.062 /dev/nbd10 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.062 1+0 records in 00:08:09.062 1+0 records out 00:08:09.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861041 s, 4.8 MB/s 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.062 14:13:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:09.321 /dev/nbd11 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.321 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.581 1+0 records in 00:08:09.581 1+0 records out 00:08:09.581 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102137 s, 4.0 MB/s 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.581 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:09.840 /dev/nbd12 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
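Note: the grep / dd / stat sequences repeated throughout this part of the trace are the waitfornbd helper checking that each freshly attached /dev/nbdN shows up in /proc/partitions and can serve a single 4 KiB O_DIRECT read. A condensed sketch of that probe is below; the function name and retry count are illustrative, and the individual commands mirror the trace.

# Condensed sketch of the per-device readability probe seen in the trace.
probe_nbd() {
    local dev=$1 tmp
    tmp=$(mktemp)

    # Wait until the kernel lists the device in /proc/partitions.
    for _ in $(seq 1 20); do
        if grep -q -w "$(basename "$dev")" /proc/partitions; then break; fi
        sleep 0.1
    done

    # Read one 4 KiB block with O_DIRECT and confirm the copied size.
    dd if="$dev" of="$tmp" bs=4096 count=1 iflag=direct
    [ "$(stat -c %s "$tmp")" -eq 4096 ] || return 1
    rm -f "$tmp"
}

probe_nbd /dev/nbd12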
00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.840 1+0 records in 00:08:09.840 1+0 records out 00:08:09.840 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000882998 s, 4.6 MB/s 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.840 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:10.099 /dev/nbd13 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.099 1+0 records in 00:08:10.099 1+0 records out 00:08:10.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805718 s, 5.1 MB/s 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.099 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:10.359 /dev/nbd14 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.359 1+0 records in 00:08:10.359 1+0 records out 00:08:10.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114896 s, 3.6 MB/s 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.359 14:13:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd0", 00:08:10.618 "bdev_name": "Nvme0n1" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd1", 00:08:10.618 "bdev_name": "Nvme1n1p1" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd10", 00:08:10.618 "bdev_name": "Nvme1n1p2" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd11", 00:08:10.618 "bdev_name": "Nvme2n1" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd12", 00:08:10.618 "bdev_name": "Nvme2n2" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd13", 00:08:10.618 "bdev_name": "Nvme2n3" 
00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd14", 00:08:10.618 "bdev_name": "Nvme3n1" 00:08:10.618 } 00:08:10.618 ]' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd0", 00:08:10.618 "bdev_name": "Nvme0n1" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd1", 00:08:10.618 "bdev_name": "Nvme1n1p1" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd10", 00:08:10.618 "bdev_name": "Nvme1n1p2" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd11", 00:08:10.618 "bdev_name": "Nvme2n1" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd12", 00:08:10.618 "bdev_name": "Nvme2n2" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd13", 00:08:10.618 "bdev_name": "Nvme2n3" 00:08:10.618 }, 00:08:10.618 { 00:08:10.618 "nbd_device": "/dev/nbd14", 00:08:10.618 "bdev_name": "Nvme3n1" 00:08:10.618 } 00:08:10.618 ]' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:10.618 /dev/nbd1 00:08:10.618 /dev/nbd10 00:08:10.618 /dev/nbd11 00:08:10.618 /dev/nbd12 00:08:10.618 /dev/nbd13 00:08:10.618 /dev/nbd14' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:10.618 /dev/nbd1 00:08:10.618 /dev/nbd10 00:08:10.618 /dev/nbd11 00:08:10.618 /dev/nbd12 00:08:10.618 /dev/nbd13 00:08:10.618 /dev/nbd14' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:10.618 256+0 records in 00:08:10.618 256+0 records out 00:08:10.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00758208 s, 138 MB/s 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.618 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:10.877 256+0 records in 00:08:10.877 256+0 records out 00:08:10.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.18989 s, 5.5 MB/s 00:08:10.877 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.877 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:10.877 256+0 records in 00:08:10.877 256+0 records out 00:08:10.877 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191982 s, 5.5 MB/s 00:08:10.877 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.877 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:11.136 256+0 records in 00:08:11.136 256+0 records out 00:08:11.136 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189141 s, 5.5 MB/s 00:08:11.136 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.136 14:13:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:11.395 256+0 records in 00:08:11.395 256+0 records out 00:08:11.395 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191251 s, 5.5 MB/s 00:08:11.395 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.395 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:11.654 256+0 records in 00:08:11.654 256+0 records out 00:08:11.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189578 s, 5.5 MB/s 00:08:11.654 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.654 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:11.654 256+0 records in 00:08:11.654 256+0 records out 00:08:11.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161342 s, 6.5 MB/s 00:08:11.654 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.654 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:11.913 256+0 records in 00:08:11.913 256+0 records out 00:08:11.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188563 s, 5.6 MB/s 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.913 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.482 14:13:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.741 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.999 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.258 14:13:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.516 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.775 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.033 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.034 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.034 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.034 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.034 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.034 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.034 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:14.292 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.293 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:14.293 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:14.293 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:14.293 14:13:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:14.551 malloc_lvol_verify 00:08:14.551 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.810 1516a9d5-e15c-4e65-9fba-d667282e23be 00:08:14.810 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:15.069 a8e1cbdf-e81d-4fac-9977-9f6a9733aba2 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:15.069 /dev/nbd0 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:15.069 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.069 Discarding device blocks: 0/4096 done 00:08:15.069 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:15.069 00:08:15.069 Allocating group tables: 0/1 done 00:08:15.069 Writing inode tables: 0/1 done 00:08:15.069 Creating journal (1024 blocks): done 00:08:15.069 Writing superblocks and filesystem accounting information: 0/1 done 00:08:15.069 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:15.069 14:13:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 66423 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 66423 ']' 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 66423 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66423 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66423' 00:08:15.328 killing process with pid 66423 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 66423 00:08:15.328 14:13:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 66423 00:08:16.707 14:13:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:16.707 00:08:16.707 real 0m13.429s 00:08:16.707 user 0m18.605s 00:08:16.707 sys 0m4.382s 00:08:16.707 14:13:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.707 14:13:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:16.707 ************************************ 00:08:16.707 END TEST bdev_nbd 00:08:16.707 ************************************ 00:08:16.707 14:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:16.707 14:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:16.707 14:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:16.707 skipping fio tests on NVMe due to multi-ns failures. 00:08:16.707 14:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:16.707 14:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:16.707 14:13:36 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.707 14:13:36 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:16.707 14:13:36 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.707 14:13:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.707 ************************************ 00:08:16.707 START TEST bdev_verify 00:08:16.707 ************************************ 00:08:16.707 14:13:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.707 [2024-07-26 14:13:36.349448] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:16.707 [2024-07-26 14:13:36.349597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66861 ] 00:08:16.965 [2024-07-26 14:13:36.506721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.965 [2024-07-26 14:13:36.685913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.965 [2024-07-26 14:13:36.685946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.996 Running I/O for 5 seconds... 
00:08:23.280 00:08:23.280 Latency(us) 00:08:23.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.280 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0xbd0bd 00:08:23.280 Nvme0n1 : 5.07 1337.64 5.23 0.00 0.00 95449.06 22878.02 86269.21 00:08:23.280 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:23.280 Nvme0n1 : 5.08 1235.34 4.83 0.00 0.00 103380.66 18588.39 90558.84 00:08:23.280 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0x4ff80 00:08:23.280 Nvme1n1p1 : 5.07 1337.16 5.22 0.00 0.00 95353.40 22639.71 79119.83 00:08:23.280 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:23.280 Nvme1n1p1 : 5.08 1234.46 4.82 0.00 0.00 103197.58 19660.80 86269.21 00:08:23.280 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0x4ff7f 00:08:23.280 Nvme1n1p2 : 5.08 1336.71 5.22 0.00 0.00 95178.02 22043.93 73876.95 00:08:23.280 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:23.280 Nvme1n1p2 : 5.08 1233.60 4.82 0.00 0.00 103054.29 20733.21 83409.45 00:08:23.280 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0x80000 00:08:23.280 Nvme2n1 : 5.08 1336.32 5.22 0.00 0.00 95006.62 21686.46 69110.69 00:08:23.280 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x80000 length 0x80000 00:08:23.280 Nvme2n1 : 5.09 1233.08 4.82 0.00 0.00 102862.97 21090.68 84362.71 00:08:23.280 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0x80000 00:08:23.280 Nvme2n2 : 5.08 1335.75 5.22 0.00 0.00 94822.42 22043.93 71493.82 00:08:23.280 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x80000 length 0x80000 00:08:23.280 Nvme2n2 : 5.09 1232.41 4.81 0.00 0.00 102660.62 21924.77 87699.08 00:08:23.280 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0x80000 00:08:23.280 Nvme2n3 : 5.08 1335.16 5.22 0.00 0.00 94641.95 20971.52 74353.57 00:08:23.280 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x80000 length 0x80000 00:08:23.280 Nvme2n3 : 5.09 1231.90 4.81 0.00 0.00 102487.26 19779.96 90082.21 00:08:23.280 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x0 length 0x20000 00:08:23.280 Nvme3n1 : 5.09 1345.81 5.26 0.00 0.00 93809.63 2368.23 78166.57 00:08:23.280 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:23.280 Verification LBA range: start 0x20000 length 0x20000 00:08:23.280 Nvme3n1 : 5.09 1231.45 4.81 0.00 0.00 102319.85 13702.98 91512.09 00:08:23.280 =================================================================================================================== 00:08:23.280 Total : 17996.78 70.30 0.00 0.00 98713.70 2368.23 91512.09 
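The verify results above come from a single bdevperf run; the command is the one shown at the start of this test, re-stated here with its flags annotated. The annotations are best-effort readings of common bdevperf usage rather than quotes from its help text, and the trailing -C and empty '' argument are carried over from the test script unchanged:

# --json    : JSON config describing the bdevs under test
# -q 128    : I/O queue depth per job
# -o 4096   : I/O size in bytes
# -w verify : write each block, read it back and compare
# -t 5      : run time in seconds
# -m 0x3    : core mask, i.e. the two reactors on cores 0 and 1 seen above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''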
00:08:24.215 00:08:24.215 real 0m7.404s 00:08:24.215 user 0m13.592s 00:08:24.215 sys 0m0.233s 00:08:24.215 14:13:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.215 14:13:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:24.215 ************************************ 00:08:24.215 END TEST bdev_verify 00:08:24.215 ************************************ 00:08:24.215 14:13:43 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.215 14:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:24.215 14:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.215 14:13:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:24.215 ************************************ 00:08:24.215 START TEST bdev_verify_big_io 00:08:24.215 ************************************ 00:08:24.215 14:13:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:24.215 [2024-07-26 14:13:43.805240] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:24.215 [2024-07-26 14:13:43.805375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66960 ] 00:08:24.215 [2024-07-26 14:13:43.961623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.474 [2024-07-26 14:13:44.110310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.474 [2024-07-26 14:13:44.110310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.409 Running I/O for 5 seconds... 
00:08:31.974 00:08:31.974 Latency(us) 00:08:31.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.974 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0xbd0b 00:08:31.974 Nvme0n1 : 5.88 119.64 7.48 0.00 0.00 1023017.98 20494.89 1197283.14 00:08:31.974 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:31.974 Nvme0n1 : 5.86 104.89 6.56 0.00 0.00 1154021.83 32410.53 1197283.14 00:08:31.974 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0x4ff8 00:08:31.974 Nvme1n1p1 : 5.84 120.59 7.54 0.00 0.00 985080.89 87222.46 1006632.96 00:08:31.974 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:31.974 Nvme1n1p1 : 5.87 109.10 6.82 0.00 0.00 1097361.59 102951.10 1014258.97 00:08:31.974 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0x4ff7 00:08:31.974 Nvme1n1p2 : 5.98 124.31 7.77 0.00 0.00 928466.87 45994.36 827421.79 00:08:31.974 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:31.974 Nvme1n1p2 : 5.87 109.05 6.82 0.00 0.00 1064459.54 145847.39 949437.91 00:08:31.974 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0x8000 00:08:31.974 Nvme2n1 : 5.98 124.79 7.80 0.00 0.00 898180.54 46232.67 770226.73 00:08:31.974 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x8000 length 0x8000 00:08:31.974 Nvme2n1 : 5.96 111.10 6.94 0.00 0.00 1014237.37 87222.46 968502.92 00:08:31.974 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0x8000 00:08:31.974 Nvme2n2 : 6.03 126.77 7.92 0.00 0.00 860646.14 85792.58 983754.94 00:08:31.974 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x8000 length 0x8000 00:08:31.974 Nvme2n2 : 5.98 117.65 7.35 0.00 0.00 941846.51 21567.30 991380.95 00:08:31.974 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0x8000 00:08:31.974 Nvme2n3 : 6.10 115.65 7.23 0.00 0.00 920506.23 20614.05 1982761.89 00:08:31.974 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x8000 length 0x8000 00:08:31.974 Nvme2n3 : 6.07 122.60 7.66 0.00 0.00 873198.65 34793.66 1014258.97 00:08:31.974 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x0 length 0x2000 00:08:31.974 Nvme3n1 : 6.11 96.83 6.05 0.00 0.00 1075529.39 3902.37 1998013.91 00:08:31.974 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:31.974 Verification LBA range: start 0x2000 length 0x2000 00:08:31.974 Nvme3n1 : 6.08 136.81 8.55 0.00 0.00 766046.45 1452.22 1037136.99 00:08:31.974 =================================================================================================================== 00:08:31.974 Total : 1639.77 102.49 0.00 0.00 962897.76 
1452.22 1998013.91 00:08:32.912 ************************************ 00:08:32.912 END TEST bdev_verify_big_io 00:08:32.912 ************************************ 00:08:32.912 00:08:32.912 real 0m8.936s 00:08:32.912 user 0m16.661s 00:08:32.912 sys 0m0.288s 00:08:32.912 14:13:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.912 14:13:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:33.171 14:13:52 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.171 14:13:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:33.171 14:13:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.171 14:13:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.171 ************************************ 00:08:33.171 START TEST bdev_write_zeroes 00:08:33.171 ************************************ 00:08:33.171 14:13:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.171 [2024-07-26 14:13:52.818720] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:33.171 [2024-07-26 14:13:52.818944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67074 ] 00:08:33.431 [2024-07-26 14:13:52.993176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.431 [2024-07-26 14:13:53.188782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.365 Running I/O for 1 seconds... 
00:08:35.301 00:08:35.301 Latency(us) 00:08:35.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.301 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme0n1 : 1.02 7625.57 29.79 0.00 0.00 16739.92 10247.45 33840.41 00:08:35.301 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme1n1p1 : 1.02 7614.48 29.74 0.00 0.00 16731.67 10783.65 24784.52 00:08:35.301 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme1n1p2 : 1.02 7603.34 29.70 0.00 0.00 16705.48 10843.23 24069.59 00:08:35.301 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme2n1 : 1.02 7592.96 29.66 0.00 0.00 16698.50 11141.12 24307.90 00:08:35.301 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme2n2 : 1.02 7582.01 29.62 0.00 0.00 16686.91 11439.01 23235.49 00:08:35.301 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme2n3 : 1.02 7572.38 29.58 0.00 0.00 16663.44 11141.12 22163.08 00:08:35.301 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.301 Nvme3n1 : 1.02 7621.09 29.77 0.00 0.00 16592.90 9294.20 21805.61 00:08:35.301 =================================================================================================================== 00:08:35.301 Total : 53211.83 207.86 0.00 0.00 16688.29 9294.20 33840.41 00:08:36.681 00:08:36.681 real 0m3.296s 00:08:36.681 user 0m2.963s 00:08:36.681 sys 0m0.209s 00:08:36.681 14:13:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.681 ************************************ 00:08:36.681 END TEST bdev_write_zeroes 00:08:36.681 ************************************ 00:08:36.681 14:13:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:36.681 14:13:56 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.681 14:13:56 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:36.681 14:13:56 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.681 14:13:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.681 ************************************ 00:08:36.681 START TEST bdev_json_nonenclosed 00:08:36.681 ************************************ 00:08:36.681 14:13:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.681 [2024-07-26 14:13:56.171472] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:36.681 [2024-07-26 14:13:56.171648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67133 ] 00:08:36.682 [2024-07-26 14:13:56.347254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.941 [2024-07-26 14:13:56.541941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.941 [2024-07-26 14:13:56.542100] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:36.941 [2024-07-26 14:13:56.542129] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:36.941 [2024-07-26 14:13:56.542146] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.517 00:08:37.517 real 0m0.896s 00:08:37.517 user 0m0.641s 00:08:37.517 sys 0m0.148s 00:08:37.517 14:13:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.517 14:13:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:37.517 ************************************ 00:08:37.517 END TEST bdev_json_nonenclosed 00:08:37.517 ************************************ 00:08:37.517 14:13:57 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.517 14:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:37.517 14:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.517 14:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.517 ************************************ 00:08:37.517 START TEST bdev_json_nonarray 00:08:37.517 ************************************ 00:08:37.517 14:13:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:37.517 [2024-07-26 14:13:57.110111] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:37.517 [2024-07-26 14:13:57.110782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67163 ] 00:08:37.793 [2024-07-26 14:13:57.280994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.793 [2024-07-26 14:13:57.470736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.793 [2024-07-26 14:13:57.470889] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
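Both JSON tests above feed bdevperf a deliberately broken config: nonenclosed.json is not wrapped in an outer {} and nonarray.json carries a 'subsystems' value that is not an array, which produces the json_config.c errors seen in the trace. For contrast, a minimal well-formed config follows the shape sketched below; the bdev_malloc_create entry and the /tmp path are illustrative and are not the contents of the test files:

# Sketch of a minimally well-formed SPDK JSON config, for contrast with the
# intentionally malformed nonenclosed.json / nonarray.json used above.
cat > /tmp/good_bdev_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 16384, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF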
00:08:37.793 [2024-07-26 14:13:57.470923] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:37.793 [2024-07-26 14:13:57.470952] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:38.360 00:08:38.360 real 0m0.877s 00:08:38.360 user 0m0.646s 00:08:38.360 sys 0m0.124s 00:08:38.360 ************************************ 00:08:38.360 END TEST bdev_json_nonarray 00:08:38.360 ************************************ 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:38.360 14:13:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:38.360 14:13:57 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:38.360 14:13:57 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:38.360 14:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.360 14:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.360 14:13:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:38.360 ************************************ 00:08:38.360 START TEST bdev_gpt_uuid 00:08:38.360 ************************************ 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:38.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67189 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67189 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67189 ']' 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.360 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.361 14:13:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:38.361 [2024-07-26 14:13:58.076021] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:38.361 [2024-07-26 14:13:58.076465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67189 ] 00:08:38.619 [2024-07-26 14:13:58.257374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.878 [2024-07-26 14:13:58.515236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.816 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.816 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:39.816 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:39.816 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.816 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.075 Some configs were skipped because the RPC state that can call them passed over. 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:40.075 { 00:08:40.075 "name": "Nvme1n1p1", 00:08:40.075 "aliases": [ 00:08:40.075 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:40.075 ], 00:08:40.075 "product_name": "GPT Disk", 00:08:40.075 "block_size": 4096, 00:08:40.075 "num_blocks": 655104, 00:08:40.075 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:40.075 "assigned_rate_limits": { 00:08:40.075 "rw_ios_per_sec": 0, 00:08:40.075 "rw_mbytes_per_sec": 0, 00:08:40.075 "r_mbytes_per_sec": 0, 00:08:40.075 "w_mbytes_per_sec": 0 00:08:40.075 }, 00:08:40.075 "claimed": false, 00:08:40.075 "zoned": false, 00:08:40.075 "supported_io_types": { 00:08:40.075 "read": true, 00:08:40.075 "write": true, 00:08:40.075 "unmap": true, 00:08:40.075 "flush": true, 00:08:40.075 "reset": true, 00:08:40.075 "nvme_admin": false, 00:08:40.075 "nvme_io": false, 00:08:40.075 "nvme_io_md": false, 00:08:40.075 "write_zeroes": true, 00:08:40.075 "zcopy": false, 00:08:40.075 "get_zone_info": false, 00:08:40.075 "zone_management": false, 00:08:40.075 "zone_append": false, 00:08:40.075 "compare": true, 00:08:40.075 "compare_and_write": false, 00:08:40.075 "abort": true, 00:08:40.075 "seek_hole": false, 00:08:40.075 "seek_data": false, 00:08:40.075 "copy": true, 00:08:40.075 "nvme_iov_md": false 00:08:40.075 }, 00:08:40.075 "driver_specific": { 
00:08:40.075 "gpt": { 00:08:40.075 "base_bdev": "Nvme1n1", 00:08:40.075 "offset_blocks": 256, 00:08:40.075 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:40.075 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:40.075 "partition_name": "SPDK_TEST_first" 00:08:40.075 } 00:08:40.075 } 00:08:40.075 } 00:08:40.075 ]' 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:40.075 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:40.076 { 00:08:40.076 "name": "Nvme1n1p2", 00:08:40.076 "aliases": [ 00:08:40.076 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:40.076 ], 00:08:40.076 "product_name": "GPT Disk", 00:08:40.076 "block_size": 4096, 00:08:40.076 "num_blocks": 655103, 00:08:40.076 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:40.076 "assigned_rate_limits": { 00:08:40.076 "rw_ios_per_sec": 0, 00:08:40.076 "rw_mbytes_per_sec": 0, 00:08:40.076 "r_mbytes_per_sec": 0, 00:08:40.076 "w_mbytes_per_sec": 0 00:08:40.076 }, 00:08:40.076 "claimed": false, 00:08:40.076 "zoned": false, 00:08:40.076 "supported_io_types": { 00:08:40.076 "read": true, 00:08:40.076 "write": true, 00:08:40.076 "unmap": true, 00:08:40.076 "flush": true, 00:08:40.076 "reset": true, 00:08:40.076 "nvme_admin": false, 00:08:40.076 "nvme_io": false, 00:08:40.076 "nvme_io_md": false, 00:08:40.076 "write_zeroes": true, 00:08:40.076 "zcopy": false, 00:08:40.076 "get_zone_info": false, 00:08:40.076 "zone_management": false, 00:08:40.076 "zone_append": false, 00:08:40.076 "compare": true, 00:08:40.076 "compare_and_write": false, 00:08:40.076 "abort": true, 00:08:40.076 "seek_hole": false, 00:08:40.076 "seek_data": false, 00:08:40.076 "copy": true, 00:08:40.076 "nvme_iov_md": false 00:08:40.076 }, 00:08:40.076 "driver_specific": { 00:08:40.076 "gpt": { 00:08:40.076 "base_bdev": "Nvme1n1", 00:08:40.076 "offset_blocks": 655360, 00:08:40.076 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:40.076 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:40.076 "partition_name": "SPDK_TEST_second" 00:08:40.076 } 00:08:40.076 } 00:08:40.076 } 00:08:40.076 ]' 00:08:40.076 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67189 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67189 ']' 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67189 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.335 14:13:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67189 00:08:40.335 killing process with pid 67189 00:08:40.335 14:14:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.335 14:14:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.335 14:14:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67189' 00:08:40.335 14:14:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67189 00:08:40.335 14:14:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67189 00:08:42.869 00:08:42.869 real 0m4.255s 00:08:42.869 user 0m4.567s 00:08:42.869 sys 0m0.471s 00:08:42.869 14:14:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.869 ************************************ 00:08:42.869 END TEST bdev_gpt_uuid 00:08:42.869 ************************************ 00:08:42.869 14:14:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:42.869 14:14:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:42.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.128 Waiting for block devices as requested 00:08:43.128 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.386 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:43.386 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.386 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.670 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:48.670 14:14:08 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:48.670 14:14:08 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:48.670 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:48.670 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:48.670 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:48.670 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:48.670 14:14:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:48.670 ************************************ 00:08:48.670 END TEST blockdev_nvme_gpt 00:08:48.670 ************************************ 00:08:48.670 00:08:48.670 real 1m3.027s 00:08:48.670 user 1m19.839s 00:08:48.670 sys 0m9.393s 00:08:48.670 14:14:08 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.670 14:14:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.928 14:14:08 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:48.928 14:14:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:48.928 14:14:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.928 14:14:08 -- common/autotest_common.sh@10 -- # set +x 00:08:48.928 ************************************ 00:08:48.928 START TEST nvme 00:08:48.928 ************************************ 00:08:48.929 14:14:08 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:48.929 * Looking for test storage... 00:08:48.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.929 14:14:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:49.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.064 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.064 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.064 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.064 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.064 14:14:09 nvme -- nvme/nvme.sh@79 -- # uname 00:08:50.064 14:14:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:50.064 14:14:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:50.064 14:14:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:50.064 Waiting for stub to ready for secondary processes... 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1071 -- # stubpid=67834 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
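The 8 bytes that wipefs reports erasing at the two GPT offsets above, 45 46 49 20 50 41 52 54, are the ASCII GPT header signature "EFI PART", and the 2 bytes 55 aa at offset 0x000001fe are the protective-MBR boot signature. A minimal sketch to decode the signature bytes, assuming xxd is available (it is not invoked by the test run itself):

  # Decode the hex bytes wipefs printed for the erased GPT headers.
  echo '45 46 49 20 50 41 52 54' | xxd -r -p; echo    # prints: EFI PART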
00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/67834 ]] 00:08:50.064 14:14:09 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:50.323 [2024-07-26 14:14:09.875574] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:50.323 [2024-07-26 14:14:09.875750] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:51.258 [2024-07-26 14:14:10.672664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.258 14:14:10 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:51.258 14:14:10 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/67834 ]] 00:08:51.258 14:14:10 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:51.258 [2024-07-26 14:14:10.891343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.258 [2024-07-26 14:14:10.891425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.258 [2024-07-26 14:14:10.891442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.258 [2024-07-26 14:14:10.908318] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:51.258 [2024-07-26 14:14:10.908365] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.258 [2024-07-26 14:14:10.921082] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:51.258 [2024-07-26 14:14:10.921301] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:51.258 [2024-07-26 14:14:10.923685] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.258 [2024-07-26 14:14:10.923980] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:51.258 [2024-07-26 14:14:10.924064] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:51.258 [2024-07-26 14:14:10.926767] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.258 [2024-07-26 14:14:10.927029] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:51.258 [2024-07-26 14:14:10.927120] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:51.258 [2024-07-26 14:14:10.930294] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.258 [2024-07-26 14:14:10.930543] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:51.258 [2024-07-26 14:14:10.930644] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:51.258 [2024-07-26 14:14:10.930714] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:51.258 [2024-07-26 14:14:10.930787] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:52.196 done. 00:08:52.196 14:14:11 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:52.196 14:14:11 nvme -- common/autotest_common.sh@1078 -- # echo done. 
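The stub above was launched with the core mask -m 0xE; in binary that is 1110, i.e. cores 1, 2 and 3, which matches the three "Reactor started on core" notices. A minimal sketch, assuming only bash, that expands such a mask into the cores it selects:

  # Expand an SPDK/DPDK-style hex core mask into the core numbers it selects.
  mask=0xE
  for core in $(seq 0 7); do
    if (( (mask >> core) & 1 )); then printf '%d ' "$core"; fi
  done; echo    # prints: 1 2 3

Once the spdk/nvme* cuse sessions above exist, the controllers can also be reached with standard nvme-cli, for example nvme id-ctrl /dev/spdk/nvme0 (assuming nvme-cli is installed); that command is an illustration, not something this test run executes.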
00:08:52.196 14:14:11 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.196 14:14:11 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:52.196 14:14:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.196 14:14:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.196 ************************************ 00:08:52.196 START TEST nvme_reset 00:08:52.196 ************************************ 00:08:52.196 14:14:11 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.454 Initializing NVMe Controllers 00:08:52.454 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:52.454 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:52.454 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:52.454 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:52.454 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:52.454 00:08:52.454 ************************************ 00:08:52.454 END TEST nvme_reset 00:08:52.454 ************************************ 00:08:52.454 real 0m0.295s 00:08:52.454 user 0m0.119s 00:08:52.454 sys 0m0.127s 00:08:52.454 14:14:12 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.454 14:14:12 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:52.454 14:14:12 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:52.454 14:14:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.454 14:14:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.454 14:14:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.454 ************************************ 00:08:52.454 START TEST nvme_identify 00:08:52.454 ************************************ 00:08:52.454 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:52.454 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:52.454 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:52.454 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.454 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:52.454 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:52.454 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:08:52.454 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.454 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:52.454 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.713 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:08:52.713 14:14:12 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.713 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:52.975 [2024-07-26 14:14:12.522526] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 67868 terminated unexpected 00:08:52.975 ===================================================== 00:08:52.975 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.975 
===================================================== 00:08:52.975 Controller Capabilities/Features 00:08:52.975 ================================ 00:08:52.975 Vendor ID: 1b36 00:08:52.975 Subsystem Vendor ID: 1af4 00:08:52.975 Serial Number: 12340 00:08:52.975 Model Number: QEMU NVMe Ctrl 00:08:52.975 Firmware Version: 8.0.0 00:08:52.975 Recommended Arb Burst: 6 00:08:52.975 IEEE OUI Identifier: 00 54 52 00:08:52.975 Multi-path I/O 00:08:52.975 May have multiple subsystem ports: No 00:08:52.975 May have multiple controllers: No 00:08:52.975 Associated with SR-IOV VF: No 00:08:52.975 Max Data Transfer Size: 524288 00:08:52.975 Max Number of Namespaces: 256 00:08:52.975 Max Number of I/O Queues: 64 00:08:52.975 NVMe Specification Version (VS): 1.4 00:08:52.975 NVMe Specification Version (Identify): 1.4 00:08:52.975 Maximum Queue Entries: 2048 00:08:52.975 Contiguous Queues Required: Yes 00:08:52.975 Arbitration Mechanisms Supported 00:08:52.975 Weighted Round Robin: Not Supported 00:08:52.975 Vendor Specific: Not Supported 00:08:52.975 Reset Timeout: 7500 ms 00:08:52.975 Doorbell Stride: 4 bytes 00:08:52.975 NVM Subsystem Reset: Not Supported 00:08:52.975 Command Sets Supported 00:08:52.975 NVM Command Set: Supported 00:08:52.975 Boot Partition: Not Supported 00:08:52.975 Memory Page Size Minimum: 4096 bytes 00:08:52.975 Memory Page Size Maximum: 65536 bytes 00:08:52.975 Persistent Memory Region: Not Supported 00:08:52.975 Optional Asynchronous Events Supported 00:08:52.975 Namespace Attribute Notices: Supported 00:08:52.975 Firmware Activation Notices: Not Supported 00:08:52.975 ANA Change Notices: Not Supported 00:08:52.975 PLE Aggregate Log Change Notices: Not Supported 00:08:52.975 LBA Status Info Alert Notices: Not Supported 00:08:52.975 EGE Aggregate Log Change Notices: Not Supported 00:08:52.975 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.975 Zone Descriptor Change Notices: Not Supported 00:08:52.975 Discovery Log Change Notices: Not Supported 00:08:52.975 Controller Attributes 00:08:52.975 128-bit Host Identifier: Not Supported 00:08:52.975 Non-Operational Permissive Mode: Not Supported 00:08:52.975 NVM Sets: Not Supported 00:08:52.975 Read Recovery Levels: Not Supported 00:08:52.975 Endurance Groups: Not Supported 00:08:52.975 Predictable Latency Mode: Not Supported 00:08:52.975 Traffic Based Keep ALive: Not Supported 00:08:52.975 Namespace Granularity: Not Supported 00:08:52.975 SQ Associations: Not Supported 00:08:52.975 UUID List: Not Supported 00:08:52.975 Multi-Domain Subsystem: Not Supported 00:08:52.975 Fixed Capacity Management: Not Supported 00:08:52.975 Variable Capacity Management: Not Supported 00:08:52.975 Delete Endurance Group: Not Supported 00:08:52.975 Delete NVM Set: Not Supported 00:08:52.975 Extended LBA Formats Supported: Supported 00:08:52.975 Flexible Data Placement Supported: Not Supported 00:08:52.975 00:08:52.975 Controller Memory Buffer Support 00:08:52.975 ================================ 00:08:52.975 Supported: No 00:08:52.975 00:08:52.975 Persistent Memory Region Support 00:08:52.975 ================================ 00:08:52.975 Supported: No 00:08:52.975 00:08:52.975 Admin Command Set Attributes 00:08:52.975 ============================ 00:08:52.975 Security Send/Receive: Not Supported 00:08:52.975 Format NVM: Supported 00:08:52.975 Firmware Activate/Download: Not Supported 00:08:52.975 Namespace Management: Supported 00:08:52.975 Device Self-Test: Not Supported 00:08:52.975 Directives: Supported 00:08:52.975 NVMe-MI: Not Supported 
00:08:52.975 Virtualization Management: Not Supported 00:08:52.975 Doorbell Buffer Config: Supported 00:08:52.975 Get LBA Status Capability: Not Supported 00:08:52.975 Command & Feature Lockdown Capability: Not Supported 00:08:52.975 Abort Command Limit: 4 00:08:52.975 Async Event Request Limit: 4 00:08:52.975 Number of Firmware Slots: N/A 00:08:52.975 Firmware Slot 1 Read-Only: N/A 00:08:52.975 Firmware Activation Without Reset: N/A 00:08:52.975 Multiple Update Detection Support: N/A 00:08:52.975 Firmware Update Granularity: No Information Provided 00:08:52.975 Per-Namespace SMART Log: Yes 00:08:52.976 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.976 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:52.976 Command Effects Log Page: Supported 00:08:52.976 Get Log Page Extended Data: Supported 00:08:52.976 Telemetry Log Pages: Not Supported 00:08:52.976 Persistent Event Log Pages: Not Supported 00:08:52.976 Supported Log Pages Log Page: May Support 00:08:52.976 Commands Supported & Effects Log Page: Not Supported 00:08:52.976 Feature Identifiers & Effects Log Page:May Support 00:08:52.976 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.976 Data Area 4 for Telemetry Log: Not Supported 00:08:52.976 Error Log Page Entries Supported: 1 00:08:52.976 Keep Alive: Not Supported 00:08:52.976 00:08:52.976 NVM Command Set Attributes 00:08:52.976 ========================== 00:08:52.976 Submission Queue Entry Size 00:08:52.976 Max: 64 00:08:52.976 Min: 64 00:08:52.976 Completion Queue Entry Size 00:08:52.976 Max: 16 00:08:52.976 Min: 16 00:08:52.976 Number of Namespaces: 256 00:08:52.976 Compare Command: Supported 00:08:52.976 Write Uncorrectable Command: Not Supported 00:08:52.976 Dataset Management Command: Supported 00:08:52.976 Write Zeroes Command: Supported 00:08:52.976 Set Features Save Field: Supported 00:08:52.976 Reservations: Not Supported 00:08:52.976 Timestamp: Supported 00:08:52.976 Copy: Supported 00:08:52.976 Volatile Write Cache: Present 00:08:52.976 Atomic Write Unit (Normal): 1 00:08:52.976 Atomic Write Unit (PFail): 1 00:08:52.976 Atomic Compare & Write Unit: 1 00:08:52.976 Fused Compare & Write: Not Supported 00:08:52.976 Scatter-Gather List 00:08:52.976 SGL Command Set: Supported 00:08:52.976 SGL Keyed: Not Supported 00:08:52.976 SGL Bit Bucket Descriptor: Not Supported 00:08:52.976 SGL Metadata Pointer: Not Supported 00:08:52.976 Oversized SGL: Not Supported 00:08:52.976 SGL Metadata Address: Not Supported 00:08:52.976 SGL Offset: Not Supported 00:08:52.976 Transport SGL Data Block: Not Supported 00:08:52.976 Replay Protected Memory Block: Not Supported 00:08:52.976 00:08:52.976 Firmware Slot Information 00:08:52.976 ========================= 00:08:52.976 Active slot: 1 00:08:52.976 Slot 1 Firmware Revision: 1.0 00:08:52.976 00:08:52.976 00:08:52.976 Commands Supported and Effects 00:08:52.976 ============================== 00:08:52.976 Admin Commands 00:08:52.976 -------------- 00:08:52.976 Delete I/O Submission Queue (00h): Supported 00:08:52.976 Create I/O Submission Queue (01h): Supported 00:08:52.976 Get Log Page (02h): Supported 00:08:52.976 Delete I/O Completion Queue (04h): Supported 00:08:52.976 Create I/O Completion Queue (05h): Supported 00:08:52.976 Identify (06h): Supported 00:08:52.976 Abort (08h): Supported 00:08:52.976 Set Features (09h): Supported 00:08:52.976 Get Features (0Ah): Supported 00:08:52.976 Asynchronous Event Request (0Ch): Supported 00:08:52.976 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.976 Directive 
Send (19h): Supported 00:08:52.976 Directive Receive (1Ah): Supported 00:08:52.976 Virtualization Management (1Ch): Supported 00:08:52.976 Doorbell Buffer Config (7Ch): Supported 00:08:52.976 Format NVM (80h): Supported LBA-Change 00:08:52.976 I/O Commands 00:08:52.976 ------------ 00:08:52.976 Flush (00h): Supported LBA-Change 00:08:52.976 Write (01h): Supported LBA-Change 00:08:52.976 Read (02h): Supported 00:08:52.976 Compare (05h): Supported 00:08:52.976 Write Zeroes (08h): Supported LBA-Change 00:08:52.976 Dataset Management (09h): Supported LBA-Change 00:08:52.976 Unknown (0Ch): Supported 00:08:52.976 Unknown (12h): Supported 00:08:52.976 Copy (19h): Supported LBA-Change 00:08:52.976 Unknown (1Dh): Supported LBA-Change 00:08:52.976 00:08:52.976 Error Log 00:08:52.976 ========= 00:08:52.976 00:08:52.976 Arbitration 00:08:52.976 =========== 00:08:52.976 Arbitration Burst: no limit 00:08:52.976 00:08:52.976 Power Management 00:08:52.976 ================ 00:08:52.976 Number of Power States: 1 00:08:52.976 Current Power State: Power State #0 00:08:52.976 Power State #0: 00:08:52.976 Max Power: 25.00 W 00:08:52.976 Non-Operational State: Operational 00:08:52.976 Entry Latency: 16 microseconds 00:08:52.976 Exit Latency: 4 microseconds 00:08:52.976 Relative Read Throughput: 0 00:08:52.976 Relative Read Latency: 0 00:08:52.976 Relative Write Throughput: 0 00:08:52.976 Relative Write Latency: 0 00:08:52.976 Idle Power[2024-07-26 14:14:12.523873] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 67868 terminated unexpected 00:08:52.976 : Not Reported 00:08:52.976 Active Power: Not Reported 00:08:52.976 Non-Operational Permissive Mode: Not Supported 00:08:52.976 00:08:52.976 Health Information 00:08:52.976 ================== 00:08:52.976 Critical Warnings: 00:08:52.976 Available Spare Space: OK 00:08:52.976 Temperature: OK 00:08:52.976 Device Reliability: OK 00:08:52.976 Read Only: No 00:08:52.976 Volatile Memory Backup: OK 00:08:52.976 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.976 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.976 Available Spare: 0% 00:08:52.976 Available Spare Threshold: 0% 00:08:52.976 Life Percentage Used: 0% 00:08:52.976 Data Units Read: 706 00:08:52.976 Data Units Written: 597 00:08:52.976 Host Read Commands: 32876 00:08:52.976 Host Write Commands: 31914 00:08:52.976 Controller Busy Time: 0 minutes 00:08:52.976 Power Cycles: 0 00:08:52.976 Power On Hours: 0 hours 00:08:52.976 Unsafe Shutdowns: 0 00:08:52.976 Unrecoverable Media Errors: 0 00:08:52.976 Lifetime Error Log Entries: 0 00:08:52.976 Warning Temperature Time: 0 minutes 00:08:52.976 Critical Temperature Time: 0 minutes 00:08:52.976 00:08:52.976 Number of Queues 00:08:52.976 ================ 00:08:52.976 Number of I/O Submission Queues: 64 00:08:52.976 Number of I/O Completion Queues: 64 00:08:52.976 00:08:52.976 ZNS Specific Controller Data 00:08:52.976 ============================ 00:08:52.976 Zone Append Size Limit: 0 00:08:52.976 00:08:52.976 00:08:52.976 Active Namespaces 00:08:52.976 ================= 00:08:52.976 Namespace ID:1 00:08:52.976 Error Recovery Timeout: Unlimited 00:08:52.976 Command Set Identifier: NVM (00h) 00:08:52.976 Deallocate: Supported 00:08:52.976 Deallocated/Unwritten Error: Supported 00:08:52.976 Deallocated Read Value: All 0x00 00:08:52.976 Deallocate in Write Zeroes: Not Supported 00:08:52.976 Deallocated Guard Field: 0xFFFF 00:08:52.976 Flush: Supported 00:08:52.976 Reservation: Not Supported 00:08:52.976 Metadata Transferred as: 
Separate Metadata Buffer 00:08:52.976 Namespace Sharing Capabilities: Private 00:08:52.976 Size (in LBAs): 1548666 (5GiB) 00:08:52.976 Capacity (in LBAs): 1548666 (5GiB) 00:08:52.976 Utilization (in LBAs): 1548666 (5GiB) 00:08:52.976 Thin Provisioning: Not Supported 00:08:52.976 Per-NS Atomic Units: No 00:08:52.976 Maximum Single Source Range Length: 128 00:08:52.976 Maximum Copy Length: 128 00:08:52.976 Maximum Source Range Count: 128 00:08:52.976 NGUID/EUI64 Never Reused: No 00:08:52.976 Namespace Write Protected: No 00:08:52.976 Number of LBA Formats: 8 00:08:52.976 Current LBA Format: LBA Format #07 00:08:52.976 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.976 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.976 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.976 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.976 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.976 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.976 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.976 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.976 00:08:52.976 NVM Specific Namespace Data 00:08:52.976 =========================== 00:08:52.976 Logical Block Storage Tag Mask: 0 00:08:52.976 Protection Information Capabilities: 00:08:52.976 16b Guard Protection Information Storage Tag Support: No 00:08:52.976 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.976 Storage Tag Check Read Support: No 00:08:52.976 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.976 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.977 ===================================================== 00:08:52.977 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.977 ===================================================== 00:08:52.977 Controller Capabilities/Features 00:08:52.977 ================================ 00:08:52.977 Vendor ID: 1b36 00:08:52.977 Subsystem Vendor ID: 1af4 00:08:52.977 Serial Number: 12341 00:08:52.977 Model Number: QEMU NVMe Ctrl 00:08:52.977 Firmware Version: 8.0.0 00:08:52.977 Recommended Arb Burst: 6 00:08:52.977 IEEE OUI Identifier: 00 54 52 00:08:52.977 Multi-path I/O 00:08:52.977 May have multiple subsystem ports: No 00:08:52.977 May have multiple controllers: No 00:08:52.977 Associated with SR-IOV VF: No 00:08:52.977 Max Data Transfer Size: 524288 00:08:52.977 Max Number of Namespaces: 256 00:08:52.977 Max Number of I/O Queues: 64 00:08:52.977 NVMe Specification Version (VS): 1.4 00:08:52.977 NVMe Specification Version (Identify): 1.4 00:08:52.977 Maximum Queue Entries: 2048 00:08:52.977 Contiguous Queues Required: Yes 00:08:52.977 Arbitration Mechanisms Supported 00:08:52.977 Weighted Round Robin: Not Supported 00:08:52.977 Vendor Specific: Not Supported 00:08:52.977 Reset Timeout: 7500 ms 
00:08:52.977 Doorbell Stride: 4 bytes 00:08:52.977 NVM Subsystem Reset: Not Supported 00:08:52.977 Command Sets Supported 00:08:52.977 NVM Command Set: Supported 00:08:52.977 Boot Partition: Not Supported 00:08:52.977 Memory Page Size Minimum: 4096 bytes 00:08:52.977 Memory Page Size Maximum: 65536 bytes 00:08:52.977 Persistent Memory Region: Not Supported 00:08:52.977 Optional Asynchronous Events Supported 00:08:52.977 Namespace Attribute Notices: Supported 00:08:52.977 Firmware Activation Notices: Not Supported 00:08:52.977 ANA Change Notices: Not Supported 00:08:52.977 PLE Aggregate Log Change Notices: Not Supported 00:08:52.977 LBA Status Info Alert Notices: Not Supported 00:08:52.977 EGE Aggregate Log Change Notices: Not Supported 00:08:52.977 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.977 Zone Descriptor Change Notices: Not Supported 00:08:52.977 Discovery Log Change Notices: Not Supported 00:08:52.977 Controller Attributes 00:08:52.977 128-bit Host Identifier: Not Supported 00:08:52.977 Non-Operational Permissive Mode: Not Supported 00:08:52.977 NVM Sets: Not Supported 00:08:52.977 Read Recovery Levels: Not Supported 00:08:52.977 Endurance Groups: Not Supported 00:08:52.977 Predictable Latency Mode: Not Supported 00:08:52.977 Traffic Based Keep ALive: Not Supported 00:08:52.977 Namespace Granularity: Not Supported 00:08:52.977 SQ Associations: Not Supported 00:08:52.977 UUID List: Not Supported 00:08:52.977 Multi-Domain Subsystem: Not Supported 00:08:52.977 Fixed Capacity Management: Not Supported 00:08:52.977 Variable Capacity Management: Not Supported 00:08:52.977 Delete Endurance Group: Not Supported 00:08:52.977 Delete NVM Set: Not Supported 00:08:52.977 Extended LBA Formats Supported: Supported 00:08:52.977 Flexible Data Placement Supported: Not Supported 00:08:52.977 00:08:52.977 Controller Memory Buffer Support 00:08:52.977 ================================ 00:08:52.977 Supported: No 00:08:52.977 00:08:52.977 Persistent Memory Region Support 00:08:52.977 ================================ 00:08:52.977 Supported: No 00:08:52.977 00:08:52.977 Admin Command Set Attributes 00:08:52.977 ============================ 00:08:52.977 Security Send/Receive: Not Supported 00:08:52.977 Format NVM: Supported 00:08:52.977 Firmware Activate/Download: Not Supported 00:08:52.977 Namespace Management: Supported 00:08:52.977 Device Self-Test: Not Supported 00:08:52.977 Directives: Supported 00:08:52.977 NVMe-MI: Not Supported 00:08:52.977 Virtualization Management: Not Supported 00:08:52.977 Doorbell Buffer Config: Supported 00:08:52.977 Get LBA Status Capability: Not Supported 00:08:52.977 Command & Feature Lockdown Capability: Not Supported 00:08:52.977 Abort Command Limit: 4 00:08:52.977 Async Event Request Limit: 4 00:08:52.977 Number of Firmware Slots: N/A 00:08:52.977 Firmware Slot 1 Read-Only: N/A 00:08:52.977 Firmware Activation Without Reset: N/A 00:08:52.977 Multiple Update Detection Support: N/A 00:08:52.977 Firmware Update Granularity: No Information Provided 00:08:52.977 Per-Namespace SMART Log: Yes 00:08:52.977 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.977 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:52.977 Command Effects Log Page: Supported 00:08:52.977 Get Log Page Extended Data: Supported 00:08:52.977 Telemetry Log Pages: Not Supported 00:08:52.977 Persistent Event Log Pages: Not Supported 00:08:52.977 Supported Log Pages Log Page: May Support 00:08:52.977 Commands Supported & Effects Log Page: Not Supported 00:08:52.977 Feature Identifiers & 
Effects Log Page:May Support 00:08:52.977 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.977 Data Area 4 for Telemetry Log: Not Supported 00:08:52.977 Error Log Page Entries Supported: 1 00:08:52.977 Keep Alive: Not Supported 00:08:52.977 00:08:52.977 NVM Command Set Attributes 00:08:52.977 ========================== 00:08:52.977 Submission Queue Entry Size 00:08:52.977 Max: 64 00:08:52.977 Min: 64 00:08:52.977 Completion Queue Entry Size 00:08:52.977 Max: 16 00:08:52.977 Min: 16 00:08:52.977 Number of Namespaces: 256 00:08:52.977 Compare Command: Supported 00:08:52.977 Write Uncorrectable Command: Not Supported 00:08:52.977 Dataset Management Command: Supported 00:08:52.977 Write Zeroes Command: Supported 00:08:52.977 Set Features Save Field: Supported 00:08:52.977 Reservations: Not Supported 00:08:52.977 Timestamp: Supported 00:08:52.977 Copy: Supported 00:08:52.977 Volatile Write Cache: Present 00:08:52.977 Atomic Write Unit (Normal): 1 00:08:52.977 Atomic Write Unit (PFail): 1 00:08:52.977 Atomic Compare & Write Unit: 1 00:08:52.977 Fused Compare & Write: Not Supported 00:08:52.977 Scatter-Gather List 00:08:52.977 SGL Command Set: Supported 00:08:52.977 SGL Keyed: Not Supported 00:08:52.977 SGL Bit Bucket Descriptor: Not Supported 00:08:52.977 SGL Metadata Pointer: Not Supported 00:08:52.977 Oversized SGL: Not Supported 00:08:52.977 SGL Metadata Address: Not Supported 00:08:52.977 SGL Offset: Not Supported 00:08:52.977 Transport SGL Data Block: Not Supported 00:08:52.977 Replay Protected Memory Block: Not Supported 00:08:52.977 00:08:52.977 Firmware Slot Information 00:08:52.977 ========================= 00:08:52.977 Active slot: 1 00:08:52.977 Slot 1 Firmware Revision: 1.0 00:08:52.977 00:08:52.977 00:08:52.977 Commands Supported and Effects 00:08:52.977 ============================== 00:08:52.977 Admin Commands 00:08:52.977 -------------- 00:08:52.977 Delete I/O Submission Queue (00h): Supported 00:08:52.977 Create I/O Submission Queue (01h): Supported 00:08:52.977 Get Log Page (02h): Supported 00:08:52.977 Delete I/O Completion Queue (04h): Supported 00:08:52.977 Create I/O Completion Queue (05h): Supported 00:08:52.977 Identify (06h): Supported 00:08:52.977 Abort (08h): Supported 00:08:52.977 Set Features (09h): Supported 00:08:52.977 Get Features (0Ah): Supported 00:08:52.977 Asynchronous Event Request (0Ch): Supported 00:08:52.977 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.977 Directive Send (19h): Supported 00:08:52.977 Directive Receive (1Ah): Supported 00:08:52.977 Virtualization Management (1Ch): Supported 00:08:52.977 Doorbell Buffer Config (7Ch): Supported 00:08:52.977 Format NVM (80h): Supported LBA-Change 00:08:52.977 I/O Commands 00:08:52.977 ------------ 00:08:52.977 Flush (00h): Supported LBA-Change 00:08:52.977 Write (01h): Supported LBA-Change 00:08:52.977 Read (02h): Supported 00:08:52.977 Compare (05h): Supported 00:08:52.977 Write Zeroes (08h): Supported LBA-Change 00:08:52.977 Dataset Management (09h): Supported LBA-Change 00:08:52.977 Unknown (0Ch): Supported 00:08:52.977 Unknown (12h): Supported 00:08:52.977 Copy (19h): Supported LBA-Change 00:08:52.977 Unknown (1Dh): Supported LBA-Change 00:08:52.977 00:08:52.977 Error Log 00:08:52.977 ========= 00:08:52.977 00:08:52.977 Arbitration 00:08:52.977 =========== 00:08:52.977 Arbitration Burst: no limit 00:08:52.977 00:08:52.977 Power Management 00:08:52.977 ================ 00:08:52.977 Number of Power States: 1 00:08:52.977 Current Power State: Power State #0 00:08:52.977 Power 
State #0: 00:08:52.977 Max Power: 25.00 W 00:08:52.977 Non-Operational State: Operational 00:08:52.977 Entry Latency: 16 microseconds 00:08:52.977 Exit Latency: 4 microseconds 00:08:52.977 Relative Read Throughput: 0 00:08:52.977 Relative Read Latency: 0 00:08:52.978 Relative Write Throughput: 0 00:08:52.978 Relative Write Latency: 0 00:08:52.978 Idle Power: Not Reported 00:08:52.978 Active Power: Not Reported 00:08:52.978 Non-Operational Permissive Mode: Not Supported 00:08:52.978 00:08:52.978 Health Information 00:08:52.978 ================== 00:08:52.978 Critical Warnings: 00:08:52.978 Available Spare Space: OK 00:08:52.978 Temperature: [2024-07-26 14:14:12.525064] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 67868 terminated unexpected 00:08:52.978 OK 00:08:52.978 Device Reliability: OK 00:08:52.978 Read Only: No 00:08:52.978 Volatile Memory Backup: OK 00:08:52.978 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.978 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.978 Available Spare: 0% 00:08:52.978 Available Spare Threshold: 0% 00:08:52.978 Life Percentage Used: 0% 00:08:52.978 Data Units Read: 1102 00:08:52.978 Data Units Written: 891 00:08:52.978 Host Read Commands: 49390 00:08:52.978 Host Write Commands: 46504 00:08:52.978 Controller Busy Time: 0 minutes 00:08:52.978 Power Cycles: 0 00:08:52.978 Power On Hours: 0 hours 00:08:52.978 Unsafe Shutdowns: 0 00:08:52.978 Unrecoverable Media Errors: 0 00:08:52.978 Lifetime Error Log Entries: 0 00:08:52.978 Warning Temperature Time: 0 minutes 00:08:52.978 Critical Temperature Time: 0 minutes 00:08:52.978 00:08:52.978 Number of Queues 00:08:52.978 ================ 00:08:52.978 Number of I/O Submission Queues: 64 00:08:52.978 Number of I/O Completion Queues: 64 00:08:52.978 00:08:52.978 ZNS Specific Controller Data 00:08:52.978 ============================ 00:08:52.978 Zone Append Size Limit: 0 00:08:52.978 00:08:52.978 00:08:52.978 Active Namespaces 00:08:52.978 ================= 00:08:52.978 Namespace ID:1 00:08:52.978 Error Recovery Timeout: Unlimited 00:08:52.978 Command Set Identifier: NVM (00h) 00:08:52.978 Deallocate: Supported 00:08:52.978 Deallocated/Unwritten Error: Supported 00:08:52.978 Deallocated Read Value: All 0x00 00:08:52.978 Deallocate in Write Zeroes: Not Supported 00:08:52.978 Deallocated Guard Field: 0xFFFF 00:08:52.978 Flush: Supported 00:08:52.978 Reservation: Not Supported 00:08:52.978 Namespace Sharing Capabilities: Private 00:08:52.978 Size (in LBAs): 1310720 (5GiB) 00:08:52.978 Capacity (in LBAs): 1310720 (5GiB) 00:08:52.978 Utilization (in LBAs): 1310720 (5GiB) 00:08:52.978 Thin Provisioning: Not Supported 00:08:52.978 Per-NS Atomic Units: No 00:08:52.978 Maximum Single Source Range Length: 128 00:08:52.978 Maximum Copy Length: 128 00:08:52.978 Maximum Source Range Count: 128 00:08:52.978 NGUID/EUI64 Never Reused: No 00:08:52.978 Namespace Write Protected: No 00:08:52.978 Number of LBA Formats: 8 00:08:52.978 Current LBA Format: LBA Format #04 00:08:52.978 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.978 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.978 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.978 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.978 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.978 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.978 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.978 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.978 00:08:52.978 NVM 
Specific Namespace Data 00:08:52.978 =========================== 00:08:52.978 Logical Block Storage Tag Mask: 0 00:08:52.978 Protection Information Capabilities: 00:08:52.978 16b Guard Protection Information Storage Tag Support: No 00:08:52.978 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.978 Storage Tag Check Read Support: No 00:08:52.978 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.978 ===================================================== 00:08:52.978 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.978 ===================================================== 00:08:52.978 Controller Capabilities/Features 00:08:52.978 ================================ 00:08:52.978 Vendor ID: 1b36 00:08:52.978 Subsystem Vendor ID: 1af4 00:08:52.978 Serial Number: 12343 00:08:52.978 Model Number: QEMU NVMe Ctrl 00:08:52.978 Firmware Version: 8.0.0 00:08:52.978 Recommended Arb Burst: 6 00:08:52.978 IEEE OUI Identifier: 00 54 52 00:08:52.978 Multi-path I/O 00:08:52.978 May have multiple subsystem ports: No 00:08:52.978 May have multiple controllers: Yes 00:08:52.978 Associated with SR-IOV VF: No 00:08:52.978 Max Data Transfer Size: 524288 00:08:52.978 Max Number of Namespaces: 256 00:08:52.978 Max Number of I/O Queues: 64 00:08:52.978 NVMe Specification Version (VS): 1.4 00:08:52.978 NVMe Specification Version (Identify): 1.4 00:08:52.978 Maximum Queue Entries: 2048 00:08:52.978 Contiguous Queues Required: Yes 00:08:52.978 Arbitration Mechanisms Supported 00:08:52.978 Weighted Round Robin: Not Supported 00:08:52.978 Vendor Specific: Not Supported 00:08:52.978 Reset Timeout: 7500 ms 00:08:52.978 Doorbell Stride: 4 bytes 00:08:52.978 NVM Subsystem Reset: Not Supported 00:08:52.978 Command Sets Supported 00:08:52.978 NVM Command Set: Supported 00:08:52.978 Boot Partition: Not Supported 00:08:52.978 Memory Page Size Minimum: 4096 bytes 00:08:52.978 Memory Page Size Maximum: 65536 bytes 00:08:52.978 Persistent Memory Region: Not Supported 00:08:52.978 Optional Asynchronous Events Supported 00:08:52.978 Namespace Attribute Notices: Supported 00:08:52.978 Firmware Activation Notices: Not Supported 00:08:52.978 ANA Change Notices: Not Supported 00:08:52.978 PLE Aggregate Log Change Notices: Not Supported 00:08:52.978 LBA Status Info Alert Notices: Not Supported 00:08:52.978 EGE Aggregate Log Change Notices: Not Supported 00:08:52.978 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.978 Zone Descriptor Change Notices: Not Supported 00:08:52.978 Discovery Log Change Notices: Not Supported 00:08:52.978 Controller Attributes 00:08:52.978 128-bit Host Identifier: Not Supported 00:08:52.978 Non-Operational Permissive Mode: Not Supported 00:08:52.978 NVM Sets: Not Supported 00:08:52.978 Read Recovery 
Levels: Not Supported 00:08:52.978 Endurance Groups: Supported 00:08:52.978 Predictable Latency Mode: Not Supported 00:08:52.978 Traffic Based Keep ALive: Not Supported 00:08:52.978 Namespace Granularity: Not Supported 00:08:52.978 SQ Associations: Not Supported 00:08:52.978 UUID List: Not Supported 00:08:52.978 Multi-Domain Subsystem: Not Supported 00:08:52.978 Fixed Capacity Management: Not Supported 00:08:52.978 Variable Capacity Management: Not Supported 00:08:52.978 Delete Endurance Group: Not Supported 00:08:52.978 Delete NVM Set: Not Supported 00:08:52.978 Extended LBA Formats Supported: Supported 00:08:52.978 Flexible Data Placement Supported: Supported 00:08:52.978 00:08:52.978 Controller Memory Buffer Support 00:08:52.978 ================================ 00:08:52.978 Supported: No 00:08:52.978 00:08:52.978 Persistent Memory Region Support 00:08:52.978 ================================ 00:08:52.978 Supported: No 00:08:52.978 00:08:52.978 Admin Command Set Attributes 00:08:52.978 ============================ 00:08:52.978 Security Send/Receive: Not Supported 00:08:52.978 Format NVM: Supported 00:08:52.978 Firmware Activate/Download: Not Supported 00:08:52.978 Namespace Management: Supported 00:08:52.978 Device Self-Test: Not Supported 00:08:52.978 Directives: Supported 00:08:52.978 NVMe-MI: Not Supported 00:08:52.978 Virtualization Management: Not Supported 00:08:52.978 Doorbell Buffer Config: Supported 00:08:52.978 Get LBA Status Capability: Not Supported 00:08:52.978 Command & Feature Lockdown Capability: Not Supported 00:08:52.978 Abort Command Limit: 4 00:08:52.978 Async Event Request Limit: 4 00:08:52.978 Number of Firmware Slots: N/A 00:08:52.978 Firmware Slot 1 Read-Only: N/A 00:08:52.978 Firmware Activation Without Reset: N/A 00:08:52.978 Multiple Update Detection Support: N/A 00:08:52.978 Firmware Update Granularity: No Information Provided 00:08:52.978 Per-Namespace SMART Log: Yes 00:08:52.978 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.978 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:52.978 Command Effects Log Page: Supported 00:08:52.978 Get Log Page Extended Data: Supported 00:08:52.978 Telemetry Log Pages: Not Supported 00:08:52.979 Persistent Event Log Pages: Not Supported 00:08:52.979 Supported Log Pages Log Page: May Support 00:08:52.979 Commands Supported & Effects Log Page: Not Supported 00:08:52.979 Feature Identifiers & Effects Log Page:May Support 00:08:52.979 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.979 Data Area 4 for Telemetry Log: Not Supported 00:08:52.979 Error Log Page Entries Supported: 1 00:08:52.979 Keep Alive: Not Supported 00:08:52.979 00:08:52.979 NVM Command Set Attributes 00:08:52.979 ========================== 00:08:52.979 Submission Queue Entry Size 00:08:52.979 Max: 64 00:08:52.979 Min: 64 00:08:52.979 Completion Queue Entry Size 00:08:52.979 Max: 16 00:08:52.979 Min: 16 00:08:52.979 Number of Namespaces: 256 00:08:52.979 Compare Command: Supported 00:08:52.979 Write Uncorrectable Command: Not Supported 00:08:52.979 Dataset Management Command: Supported 00:08:52.979 Write Zeroes Command: Supported 00:08:52.979 Set Features Save Field: Supported 00:08:52.979 Reservations: Not Supported 00:08:52.979 Timestamp: Supported 00:08:52.979 Copy: Supported 00:08:52.979 Volatile Write Cache: Present 00:08:52.979 Atomic Write Unit (Normal): 1 00:08:52.979 Atomic Write Unit (PFail): 1 00:08:52.979 Atomic Compare & Write Unit: 1 00:08:52.979 Fused Compare & Write: Not Supported 00:08:52.979 Scatter-Gather List 
00:08:52.979 SGL Command Set: Supported 00:08:52.979 SGL Keyed: Not Supported 00:08:52.979 SGL Bit Bucket Descriptor: Not Supported 00:08:52.979 SGL Metadata Pointer: Not Supported 00:08:52.979 Oversized SGL: Not Supported 00:08:52.979 SGL Metadata Address: Not Supported 00:08:52.979 SGL Offset: Not Supported 00:08:52.979 Transport SGL Data Block: Not Supported 00:08:52.979 Replay Protected Memory Block: Not Supported 00:08:52.979 00:08:52.979 Firmware Slot Information 00:08:52.979 ========================= 00:08:52.979 Active slot: 1 00:08:52.979 Slot 1 Firmware Revision: 1.0 00:08:52.979 00:08:52.979 00:08:52.979 Commands Supported and Effects 00:08:52.979 ============================== 00:08:52.979 Admin Commands 00:08:52.979 -------------- 00:08:52.979 Delete I/O Submission Queue (00h): Supported 00:08:52.979 Create I/O Submission Queue (01h): Supported 00:08:52.979 Get Log Page (02h): Supported 00:08:52.979 Delete I/O Completion Queue (04h): Supported 00:08:52.979 Create I/O Completion Queue (05h): Supported 00:08:52.979 Identify (06h): Supported 00:08:52.979 Abort (08h): Supported 00:08:52.979 Set Features (09h): Supported 00:08:52.979 Get Features (0Ah): Supported 00:08:52.979 Asynchronous Event Request (0Ch): Supported 00:08:52.979 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.979 Directive Send (19h): Supported 00:08:52.979 Directive Receive (1Ah): Supported 00:08:52.979 Virtualization Management (1Ch): Supported 00:08:52.979 Doorbell Buffer Config (7Ch): Supported 00:08:52.979 Format NVM (80h): Supported LBA-Change 00:08:52.979 I/O Commands 00:08:52.979 ------------ 00:08:52.979 Flush (00h): Supported LBA-Change 00:08:52.979 Write (01h): Supported LBA-Change 00:08:52.979 Read (02h): Supported 00:08:52.979 Compare (05h): Supported 00:08:52.979 Write Zeroes (08h): Supported LBA-Change 00:08:52.979 Dataset Management (09h): Supported LBA-Change 00:08:52.979 Unknown (0Ch): Supported 00:08:52.979 Unknown (12h): Supported 00:08:52.979 Copy (19h): Supported LBA-Change 00:08:52.979 Unknown (1Dh): Supported LBA-Change 00:08:52.979 00:08:52.979 Error Log 00:08:52.979 ========= 00:08:52.979 00:08:52.979 Arbitration 00:08:52.979 =========== 00:08:52.979 Arbitration Burst: no limit 00:08:52.979 00:08:52.979 Power Management 00:08:52.979 ================ 00:08:52.979 Number of Power States: 1 00:08:52.979 Current Power State: Power State #0 00:08:52.979 Power State #0: 00:08:52.979 Max Power: 25.00 W 00:08:52.979 Non-Operational State: Operational 00:08:52.979 Entry Latency: 16 microseconds 00:08:52.979 Exit Latency: 4 microseconds 00:08:52.979 Relative Read Throughput: 0 00:08:52.979 Relative Read Latency: 0 00:08:52.979 Relative Write Throughput: 0 00:08:52.979 Relative Write Latency: 0 00:08:52.979 Idle Power: Not Reported 00:08:52.979 Active Power: Not Reported 00:08:52.979 Non-Operational Permissive Mode: Not Supported 00:08:52.979 00:08:52.979 Health Information 00:08:52.979 ================== 00:08:52.979 Critical Warnings: 00:08:52.979 Available Spare Space: OK 00:08:52.979 Temperature: OK 00:08:52.979 Device Reliability: OK 00:08:52.979 Read Only: No 00:08:52.979 Volatile Memory Backup: OK 00:08:52.979 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.979 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.979 Available Spare: 0% 00:08:52.979 Available Spare Threshold: 0% 00:08:52.979 Life Percentage Used: 0% 00:08:52.979 Data Units Read: 767 00:08:52.979 Data Units Written: 660 00:08:52.979 Host Read Commands: 33804 00:08:52.979 Host Write Commands: 32394 
00:08:52.979 Controller Busy Time: 0 minutes 00:08:52.979 Power Cycles: 0 00:08:52.979 Power On Hours: 0 hours 00:08:52.979 Unsafe Shutdowns: 0 00:08:52.979 Unrecoverable Media Errors: 0 00:08:52.979 Lifetime Error Log Entries: 0 00:08:52.979 Warning Temperature Time: 0 minutes 00:08:52.979 Critical Temperature Time: 0 minutes 00:08:52.979 00:08:52.979 Number of Queues 00:08:52.979 ================ 00:08:52.979 Number of I/O Submission Queues: 64 00:08:52.979 Number of I/O Completion Queues: 64 00:08:52.979 00:08:52.979 ZNS Specific Controller Data 00:08:52.979 ============================ 00:08:52.979 Zone Append Size Limit: 0 00:08:52.979 00:08:52.979 00:08:52.979 Active Namespaces 00:08:52.979 ================= 00:08:52.979 Namespace ID:1 00:08:52.979 Error Recovery Timeout: Unlimited 00:08:52.979 Command Set Identifier: NVM (00h) 00:08:52.979 Deallocate: Supported 00:08:52.979 Deallocated/Unwritten Error: Supported 00:08:52.979 Deallocated Read Value: All 0x00 00:08:52.979 Deallocate in Write Zeroes: Not Supported 00:08:52.979 Deallocated Guard Field: 0xFFFF 00:08:52.979 Flush: Supported 00:08:52.979 Reservation: Not Supported 00:08:52.979 Namespace Sharing Capabilities: Multiple Controllers 00:08:52.979 Size (in LBAs): 262144 (1GiB) 00:08:52.979 Capacity (in LBAs): 262144 (1GiB) 00:08:52.979 Utilization (in LBAs): 262144 (1GiB) 00:08:52.979 Thin Provisioning: Not Supported 00:08:52.979 Per-NS Atomic Units: No 00:08:52.979 Maximum Single Source Range Length: 128 00:08:52.979 Maximum Copy Length: 128 00:08:52.979 Maximum Source Range Count: 128 00:08:52.979 NGUID/EUI64 Never Reused: No 00:08:52.979 Namespace Write Protected: No 00:08:52.979 Endurance group ID: 1 00:08:52.979 Number of LBA Formats: 8 00:08:52.979 Current LBA Format: LBA Format #04 00:08:52.979 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.979 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.979 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.979 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.979 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.979 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.979 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.979 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.979 00:08:52.979 Get Feature FDP: 00:08:52.979 ================ 00:08:52.979 Enabled: Yes 00:08:52.979 FDP configuration index: 0 00:08:52.979 00:08:52.979 FDP configurations log page 00:08:52.979 =========================== 00:08:52.979 Number of FDP configurations: 1 00:08:52.979 Version: 0 00:08:52.979 Size: 112 00:08:52.979 FDP Configuration Descriptor: 0 00:08:52.979 Descriptor Size: 96 00:08:52.979 Reclaim Group Identifier format: 2 00:08:52.979 FDP Volatile Write Cache: Not Present 00:08:52.979 FDP Configuration: Valid 00:08:52.979 Vendor Specific Size: 0 00:08:52.979 Number of Reclaim Groups: 2 00:08:52.979 Number of Recalim Unit Handles: 8 00:08:52.979 Max Placement Identifiers: 128 00:08:52.979 Number of Namespaces Suppprted: 256 00:08:52.979 Reclaim unit Nominal Size: 6000000 bytes 00:08:52.979 Estimated Reclaim Unit Time Limit: Not Reported 00:08:52.979 RUH Desc #000: RUH Type: Initially Isolated 00:08:52.979 RUH Desc #001: RUH Type: Initially Isolated 00:08:52.979 RUH Desc #002: RUH Type: Initially Isolated 00:08:52.979 RUH Desc #003: RUH Type: Initially Isolated 00:08:52.979 RUH Desc #004: RUH Type: Initially Isolated 00:08:52.979 RUH Desc #005: RUH Type: Initially Isolated 00:08:52.979 RUH Desc #006: RUH Type: Initially Isolated 
00:08:52.980 RUH Desc #007: RUH Type: Initially Isolated 00:08:52.980 00:08:52.980 FDP reclaim unit handle usage log page 00:08:52.980 ====================================== 00:08:52.980 Number of Reclaim Unit Handles: 8 00:08:52.980 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:52.980 RUH Usage Desc #001: RUH Attributes: Unused 00:08:52.980 RUH Usage Desc #002: RUH Attributes: Unused 00:08:52.980 RUH Usage Desc #003: RUH Attributes: Unused 00:08:52.980 RUH Usage Desc #004: RUH Attributes: Unused 00:08:52.980 RUH Usage Desc #005: RUH Attributes: Unused 00:08:52.980 RUH Usage Desc #006: RUH Attributes: Unused 00:08:52.980 RUH Usage Desc #007: RUH Attributes: Unused 00:08:52.980 00:08:52.980 FDP statistics log page 00:08:52.980 ======================= 00:08:52.980 Host bytes with metadata written: 422027264 00:08:52.980 Medi[2024-07-26 14:14:12.526822] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 67868 terminated unexpected 00:08:52.980 a bytes with metadata written: 422072320 00:08:52.980 Media bytes erased: 0 00:08:52.980 00:08:52.980 FDP events log page 00:08:52.980 =================== 00:08:52.980 Number of FDP events: 0 00:08:52.980 00:08:52.980 NVM Specific Namespace Data 00:08:52.980 =========================== 00:08:52.980 Logical Block Storage Tag Mask: 0 00:08:52.980 Protection Information Capabilities: 00:08:52.980 16b Guard Protection Information Storage Tag Support: No 00:08:52.980 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.980 Storage Tag Check Read Support: No 00:08:52.980 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.980 ===================================================== 00:08:52.980 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.980 ===================================================== 00:08:52.980 Controller Capabilities/Features 00:08:52.980 ================================ 00:08:52.980 Vendor ID: 1b36 00:08:52.980 Subsystem Vendor ID: 1af4 00:08:52.980 Serial Number: 12342 00:08:52.980 Model Number: QEMU NVMe Ctrl 00:08:52.980 Firmware Version: 8.0.0 00:08:52.980 Recommended Arb Burst: 6 00:08:52.980 IEEE OUI Identifier: 00 54 52 00:08:52.980 Multi-path I/O 00:08:52.980 May have multiple subsystem ports: No 00:08:52.980 May have multiple controllers: No 00:08:52.980 Associated with SR-IOV VF: No 00:08:52.980 Max Data Transfer Size: 524288 00:08:52.980 Max Number of Namespaces: 256 00:08:52.980 Max Number of I/O Queues: 64 00:08:52.980 NVMe Specification Version (VS): 1.4 00:08:52.980 NVMe Specification Version (Identify): 1.4 00:08:52.980 Maximum Queue Entries: 2048 00:08:52.980 Contiguous Queues Required: Yes 00:08:52.980 Arbitration Mechanisms Supported 00:08:52.980 Weighted Round Robin: Not 
Supported 00:08:52.980 Vendor Specific: Not Supported 00:08:52.980 Reset Timeout: 7500 ms 00:08:52.980 Doorbell Stride: 4 bytes 00:08:52.980 NVM Subsystem Reset: Not Supported 00:08:52.980 Command Sets Supported 00:08:52.980 NVM Command Set: Supported 00:08:52.980 Boot Partition: Not Supported 00:08:52.980 Memory Page Size Minimum: 4096 bytes 00:08:52.980 Memory Page Size Maximum: 65536 bytes 00:08:52.980 Persistent Memory Region: Not Supported 00:08:52.980 Optional Asynchronous Events Supported 00:08:52.980 Namespace Attribute Notices: Supported 00:08:52.980 Firmware Activation Notices: Not Supported 00:08:52.980 ANA Change Notices: Not Supported 00:08:52.980 PLE Aggregate Log Change Notices: Not Supported 00:08:52.980 LBA Status Info Alert Notices: Not Supported 00:08:52.980 EGE Aggregate Log Change Notices: Not Supported 00:08:52.980 Normal NVM Subsystem Shutdown event: Not Supported 00:08:52.980 Zone Descriptor Change Notices: Not Supported 00:08:52.980 Discovery Log Change Notices: Not Supported 00:08:52.980 Controller Attributes 00:08:52.980 128-bit Host Identifier: Not Supported 00:08:52.980 Non-Operational Permissive Mode: Not Supported 00:08:52.980 NVM Sets: Not Supported 00:08:52.980 Read Recovery Levels: Not Supported 00:08:52.980 Endurance Groups: Not Supported 00:08:52.980 Predictable Latency Mode: Not Supported 00:08:52.980 Traffic Based Keep ALive: Not Supported 00:08:52.980 Namespace Granularity: Not Supported 00:08:52.980 SQ Associations: Not Supported 00:08:52.980 UUID List: Not Supported 00:08:52.980 Multi-Domain Subsystem: Not Supported 00:08:52.980 Fixed Capacity Management: Not Supported 00:08:52.980 Variable Capacity Management: Not Supported 00:08:52.980 Delete Endurance Group: Not Supported 00:08:52.980 Delete NVM Set: Not Supported 00:08:52.980 Extended LBA Formats Supported: Supported 00:08:52.980 Flexible Data Placement Supported: Not Supported 00:08:52.980 00:08:52.980 Controller Memory Buffer Support 00:08:52.980 ================================ 00:08:52.980 Supported: No 00:08:52.980 00:08:52.980 Persistent Memory Region Support 00:08:52.980 ================================ 00:08:52.980 Supported: No 00:08:52.980 00:08:52.980 Admin Command Set Attributes 00:08:52.980 ============================ 00:08:52.980 Security Send/Receive: Not Supported 00:08:52.980 Format NVM: Supported 00:08:52.980 Firmware Activate/Download: Not Supported 00:08:52.980 Namespace Management: Supported 00:08:52.980 Device Self-Test: Not Supported 00:08:52.980 Directives: Supported 00:08:52.980 NVMe-MI: Not Supported 00:08:52.980 Virtualization Management: Not Supported 00:08:52.980 Doorbell Buffer Config: Supported 00:08:52.980 Get LBA Status Capability: Not Supported 00:08:52.980 Command & Feature Lockdown Capability: Not Supported 00:08:52.980 Abort Command Limit: 4 00:08:52.980 Async Event Request Limit: 4 00:08:52.980 Number of Firmware Slots: N/A 00:08:52.980 Firmware Slot 1 Read-Only: N/A 00:08:52.980 Firmware Activation Without Reset: N/A 00:08:52.980 Multiple Update Detection Support: N/A 00:08:52.980 Firmware Update Granularity: No Information Provided 00:08:52.980 Per-Namespace SMART Log: Yes 00:08:52.980 Asymmetric Namespace Access Log Page: Not Supported 00:08:52.980 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:52.980 Command Effects Log Page: Supported 00:08:52.980 Get Log Page Extended Data: Supported 00:08:52.980 Telemetry Log Pages: Not Supported 00:08:52.980 Persistent Event Log Pages: Not Supported 00:08:52.980 Supported Log Pages Log Page: May Support 
00:08:52.980 Commands Supported & Effects Log Page: Not Supported 00:08:52.980 Feature Identifiers & Effects Log Page:May Support 00:08:52.980 NVMe-MI Commands & Effects Log Page: May Support 00:08:52.980 Data Area 4 for Telemetry Log: Not Supported 00:08:52.980 Error Log Page Entries Supported: 1 00:08:52.980 Keep Alive: Not Supported 00:08:52.980 00:08:52.980 NVM Command Set Attributes 00:08:52.980 ========================== 00:08:52.981 Submission Queue Entry Size 00:08:52.981 Max: 64 00:08:52.981 Min: 64 00:08:52.981 Completion Queue Entry Size 00:08:52.981 Max: 16 00:08:52.981 Min: 16 00:08:52.981 Number of Namespaces: 256 00:08:52.981 Compare Command: Supported 00:08:52.981 Write Uncorrectable Command: Not Supported 00:08:52.981 Dataset Management Command: Supported 00:08:52.981 Write Zeroes Command: Supported 00:08:52.981 Set Features Save Field: Supported 00:08:52.981 Reservations: Not Supported 00:08:52.981 Timestamp: Supported 00:08:52.981 Copy: Supported 00:08:52.981 Volatile Write Cache: Present 00:08:52.981 Atomic Write Unit (Normal): 1 00:08:52.981 Atomic Write Unit (PFail): 1 00:08:52.981 Atomic Compare & Write Unit: 1 00:08:52.981 Fused Compare & Write: Not Supported 00:08:52.981 Scatter-Gather List 00:08:52.981 SGL Command Set: Supported 00:08:52.981 SGL Keyed: Not Supported 00:08:52.981 SGL Bit Bucket Descriptor: Not Supported 00:08:52.981 SGL Metadata Pointer: Not Supported 00:08:52.981 Oversized SGL: Not Supported 00:08:52.981 SGL Metadata Address: Not Supported 00:08:52.981 SGL Offset: Not Supported 00:08:52.981 Transport SGL Data Block: Not Supported 00:08:52.981 Replay Protected Memory Block: Not Supported 00:08:52.981 00:08:52.981 Firmware Slot Information 00:08:52.981 ========================= 00:08:52.981 Active slot: 1 00:08:52.981 Slot 1 Firmware Revision: 1.0 00:08:52.981 00:08:52.981 00:08:52.981 Commands Supported and Effects 00:08:52.981 ============================== 00:08:52.981 Admin Commands 00:08:52.981 -------------- 00:08:52.981 Delete I/O Submission Queue (00h): Supported 00:08:52.981 Create I/O Submission Queue (01h): Supported 00:08:52.981 Get Log Page (02h): Supported 00:08:52.981 Delete I/O Completion Queue (04h): Supported 00:08:52.981 Create I/O Completion Queue (05h): Supported 00:08:52.981 Identify (06h): Supported 00:08:52.981 Abort (08h): Supported 00:08:52.981 Set Features (09h): Supported 00:08:52.981 Get Features (0Ah): Supported 00:08:52.981 Asynchronous Event Request (0Ch): Supported 00:08:52.981 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:52.981 Directive Send (19h): Supported 00:08:52.981 Directive Receive (1Ah): Supported 00:08:52.981 Virtualization Management (1Ch): Supported 00:08:52.981 Doorbell Buffer Config (7Ch): Supported 00:08:52.981 Format NVM (80h): Supported LBA-Change 00:08:52.981 I/O Commands 00:08:52.981 ------------ 00:08:52.981 Flush (00h): Supported LBA-Change 00:08:52.981 Write (01h): Supported LBA-Change 00:08:52.981 Read (02h): Supported 00:08:52.981 Compare (05h): Supported 00:08:52.981 Write Zeroes (08h): Supported LBA-Change 00:08:52.981 Dataset Management (09h): Supported LBA-Change 00:08:52.981 Unknown (0Ch): Supported 00:08:52.981 Unknown (12h): Supported 00:08:52.981 Copy (19h): Supported LBA-Change 00:08:52.981 Unknown (1Dh): Supported LBA-Change 00:08:52.981 00:08:52.981 Error Log 00:08:52.981 ========= 00:08:52.981 00:08:52.981 Arbitration 00:08:52.981 =========== 00:08:52.981 Arbitration Burst: no limit 00:08:52.981 00:08:52.981 Power Management 00:08:52.981 ================ 
00:08:52.981 Number of Power States: 1 00:08:52.981 Current Power State: Power State #0 00:08:52.981 Power State #0: 00:08:52.981 Max Power: 25.00 W 00:08:52.981 Non-Operational State: Operational 00:08:52.981 Entry Latency: 16 microseconds 00:08:52.981 Exit Latency: 4 microseconds 00:08:52.981 Relative Read Throughput: 0 00:08:52.981 Relative Read Latency: 0 00:08:52.981 Relative Write Throughput: 0 00:08:52.981 Relative Write Latency: 0 00:08:52.981 Idle Power: Not Reported 00:08:52.981 Active Power: Not Reported 00:08:52.981 Non-Operational Permissive Mode: Not Supported 00:08:52.981 00:08:52.981 Health Information 00:08:52.981 ================== 00:08:52.981 Critical Warnings: 00:08:52.981 Available Spare Space: OK 00:08:52.981 Temperature: OK 00:08:52.981 Device Reliability: OK 00:08:52.981 Read Only: No 00:08:52.981 Volatile Memory Backup: OK 00:08:52.981 Current Temperature: 323 Kelvin (50 Celsius) 00:08:52.981 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:52.981 Available Spare: 0% 00:08:52.981 Available Spare Threshold: 0% 00:08:52.981 Life Percentage Used: 0% 00:08:52.981 Data Units Read: 2202 00:08:52.981 Data Units Written: 1882 00:08:52.981 Host Read Commands: 100047 00:08:52.981 Host Write Commands: 95817 00:08:52.981 Controller Busy Time: 0 minutes 00:08:52.981 Power Cycles: 0 00:08:52.981 Power On Hours: 0 hours 00:08:52.981 Unsafe Shutdowns: 0 00:08:52.981 Unrecoverable Media Errors: 0 00:08:52.981 Lifetime Error Log Entries: 0 00:08:52.981 Warning Temperature Time: 0 minutes 00:08:52.981 Critical Temperature Time: 0 minutes 00:08:52.981 00:08:52.981 Number of Queues 00:08:52.981 ================ 00:08:52.981 Number of I/O Submission Queues: 64 00:08:52.981 Number of I/O Completion Queues: 64 00:08:52.981 00:08:52.981 ZNS Specific Controller Data 00:08:52.981 ============================ 00:08:52.981 Zone Append Size Limit: 0 00:08:52.981 00:08:52.981 00:08:52.981 Active Namespaces 00:08:52.981 ================= 00:08:52.981 Namespace ID:1 00:08:52.981 Error Recovery Timeout: Unlimited 00:08:52.981 Command Set Identifier: NVM (00h) 00:08:52.981 Deallocate: Supported 00:08:52.981 Deallocated/Unwritten Error: Supported 00:08:52.981 Deallocated Read Value: All 0x00 00:08:52.981 Deallocate in Write Zeroes: Not Supported 00:08:52.981 Deallocated Guard Field: 0xFFFF 00:08:52.981 Flush: Supported 00:08:52.981 Reservation: Not Supported 00:08:52.981 Namespace Sharing Capabilities: Private 00:08:52.981 Size (in LBAs): 1048576 (4GiB) 00:08:52.981 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.981 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.981 Thin Provisioning: Not Supported 00:08:52.981 Per-NS Atomic Units: No 00:08:52.981 Maximum Single Source Range Length: 128 00:08:52.981 Maximum Copy Length: 128 00:08:52.981 Maximum Source Range Count: 128 00:08:52.981 NGUID/EUI64 Never Reused: No 00:08:52.981 Namespace Write Protected: No 00:08:52.981 Number of LBA Formats: 8 00:08:52.981 Current LBA Format: LBA Format #04 00:08:52.981 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.981 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.981 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.981 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.981 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.981 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.981 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.981 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.981 00:08:52.981 NVM Specific Namespace Data 00:08:52.981 
=========================== 00:08:52.981 Logical Block Storage Tag Mask: 0 00:08:52.981 Protection Information Capabilities: 00:08:52.981 16b Guard Protection Information Storage Tag Support: No 00:08:52.981 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.981 Storage Tag Check Read Support: No 00:08:52.981 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.981 Namespace ID:2 00:08:52.981 Error Recovery Timeout: Unlimited 00:08:52.981 Command Set Identifier: NVM (00h) 00:08:52.981 Deallocate: Supported 00:08:52.981 Deallocated/Unwritten Error: Supported 00:08:52.981 Deallocated Read Value: All 0x00 00:08:52.981 Deallocate in Write Zeroes: Not Supported 00:08:52.981 Deallocated Guard Field: 0xFFFF 00:08:52.981 Flush: Supported 00:08:52.981 Reservation: Not Supported 00:08:52.981 Namespace Sharing Capabilities: Private 00:08:52.981 Size (in LBAs): 1048576 (4GiB) 00:08:52.981 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.981 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.981 Thin Provisioning: Not Supported 00:08:52.981 Per-NS Atomic Units: No 00:08:52.981 Maximum Single Source Range Length: 128 00:08:52.981 Maximum Copy Length: 128 00:08:52.981 Maximum Source Range Count: 128 00:08:52.981 NGUID/EUI64 Never Reused: No 00:08:52.981 Namespace Write Protected: No 00:08:52.982 Number of LBA Formats: 8 00:08:52.982 Current LBA Format: LBA Format #04 00:08:52.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.982 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.982 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.982 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.982 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.982 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.982 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.982 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.982 00:08:52.982 NVM Specific Namespace Data 00:08:52.982 =========================== 00:08:52.982 Logical Block Storage Tag Mask: 0 00:08:52.982 Protection Information Capabilities: 00:08:52.982 16b Guard Protection Information Storage Tag Support: No 00:08:52.982 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.982 Storage Tag Check Read Support: No 00:08:52.982 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #04: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Namespace ID:3 00:08:52.982 Error Recovery Timeout: Unlimited 00:08:52.982 Command Set Identifier: NVM (00h) 00:08:52.982 Deallocate: Supported 00:08:52.982 Deallocated/Unwritten Error: Supported 00:08:52.982 Deallocated Read Value: All 0x00 00:08:52.982 Deallocate in Write Zeroes: Not Supported 00:08:52.982 Deallocated Guard Field: 0xFFFF 00:08:52.982 Flush: Supported 00:08:52.982 Reservation: Not Supported 00:08:52.982 Namespace Sharing Capabilities: Private 00:08:52.982 Size (in LBAs): 1048576 (4GiB) 00:08:52.982 Capacity (in LBAs): 1048576 (4GiB) 00:08:52.982 Utilization (in LBAs): 1048576 (4GiB) 00:08:52.982 Thin Provisioning: Not Supported 00:08:52.982 Per-NS Atomic Units: No 00:08:52.982 Maximum Single Source Range Length: 128 00:08:52.982 Maximum Copy Length: 128 00:08:52.982 Maximum Source Range Count: 128 00:08:52.982 NGUID/EUI64 Never Reused: No 00:08:52.982 Namespace Write Protected: No 00:08:52.982 Number of LBA Formats: 8 00:08:52.982 Current LBA Format: LBA Format #04 00:08:52.982 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:52.982 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:52.982 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:52.982 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:52.982 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:52.982 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:52.982 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:52.982 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:52.982 00:08:52.982 NVM Specific Namespace Data 00:08:52.982 =========================== 00:08:52.982 Logical Block Storage Tag Mask: 0 00:08:52.982 Protection Information Capabilities: 00:08:52.982 16b Guard Protection Information Storage Tag Support: No 00:08:52.982 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:52.982 Storage Tag Check Read Support: No 00:08:52.982 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:52.982 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:52.982 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:53.242 ===================================================== 00:08:53.242 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.242 ===================================================== 00:08:53.242 
Controller Capabilities/Features 00:08:53.242 ================================ 00:08:53.242 Vendor ID: 1b36 00:08:53.242 Subsystem Vendor ID: 1af4 00:08:53.242 Serial Number: 12340 00:08:53.242 Model Number: QEMU NVMe Ctrl 00:08:53.242 Firmware Version: 8.0.0 00:08:53.242 Recommended Arb Burst: 6 00:08:53.242 IEEE OUI Identifier: 00 54 52 00:08:53.242 Multi-path I/O 00:08:53.242 May have multiple subsystem ports: No 00:08:53.242 May have multiple controllers: No 00:08:53.242 Associated with SR-IOV VF: No 00:08:53.242 Max Data Transfer Size: 524288 00:08:53.242 Max Number of Namespaces: 256 00:08:53.242 Max Number of I/O Queues: 64 00:08:53.242 NVMe Specification Version (VS): 1.4 00:08:53.242 NVMe Specification Version (Identify): 1.4 00:08:53.242 Maximum Queue Entries: 2048 00:08:53.242 Contiguous Queues Required: Yes 00:08:53.242 Arbitration Mechanisms Supported 00:08:53.242 Weighted Round Robin: Not Supported 00:08:53.242 Vendor Specific: Not Supported 00:08:53.242 Reset Timeout: 7500 ms 00:08:53.242 Doorbell Stride: 4 bytes 00:08:53.242 NVM Subsystem Reset: Not Supported 00:08:53.242 Command Sets Supported 00:08:53.242 NVM Command Set: Supported 00:08:53.242 Boot Partition: Not Supported 00:08:53.242 Memory Page Size Minimum: 4096 bytes 00:08:53.242 Memory Page Size Maximum: 65536 bytes 00:08:53.242 Persistent Memory Region: Not Supported 00:08:53.242 Optional Asynchronous Events Supported 00:08:53.242 Namespace Attribute Notices: Supported 00:08:53.242 Firmware Activation Notices: Not Supported 00:08:53.242 ANA Change Notices: Not Supported 00:08:53.242 PLE Aggregate Log Change Notices: Not Supported 00:08:53.242 LBA Status Info Alert Notices: Not Supported 00:08:53.242 EGE Aggregate Log Change Notices: Not Supported 00:08:53.242 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.242 Zone Descriptor Change Notices: Not Supported 00:08:53.242 Discovery Log Change Notices: Not Supported 00:08:53.242 Controller Attributes 00:08:53.242 128-bit Host Identifier: Not Supported 00:08:53.242 Non-Operational Permissive Mode: Not Supported 00:08:53.242 NVM Sets: Not Supported 00:08:53.242 Read Recovery Levels: Not Supported 00:08:53.242 Endurance Groups: Not Supported 00:08:53.242 Predictable Latency Mode: Not Supported 00:08:53.242 Traffic Based Keep ALive: Not Supported 00:08:53.242 Namespace Granularity: Not Supported 00:08:53.242 SQ Associations: Not Supported 00:08:53.242 UUID List: Not Supported 00:08:53.242 Multi-Domain Subsystem: Not Supported 00:08:53.242 Fixed Capacity Management: Not Supported 00:08:53.242 Variable Capacity Management: Not Supported 00:08:53.242 Delete Endurance Group: Not Supported 00:08:53.242 Delete NVM Set: Not Supported 00:08:53.242 Extended LBA Formats Supported: Supported 00:08:53.242 Flexible Data Placement Supported: Not Supported 00:08:53.242 00:08:53.242 Controller Memory Buffer Support 00:08:53.242 ================================ 00:08:53.242 Supported: No 00:08:53.242 00:08:53.242 Persistent Memory Region Support 00:08:53.242 ================================ 00:08:53.242 Supported: No 00:08:53.242 00:08:53.242 Admin Command Set Attributes 00:08:53.242 ============================ 00:08:53.242 Security Send/Receive: Not Supported 00:08:53.242 Format NVM: Supported 00:08:53.242 Firmware Activate/Download: Not Supported 00:08:53.242 Namespace Management: Supported 00:08:53.242 Device Self-Test: Not Supported 00:08:53.242 Directives: Supported 00:08:53.242 NVMe-MI: Not Supported 00:08:53.242 Virtualization Management: Not Supported 00:08:53.242 
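The Max Data Transfer Size of 524288 bytes reported for 0000:00:10.0 is consistent with an MDTS expressed as a power-of-two multiple of the 4096-byte minimum memory page size; this is a spec-level reading of the field, not something the test asserts on:

  # 524288 bytes / 4096-byte minimum page size = 128 = 2^7 pages per transfer.
  echo $(( 524288 / 4096 ))   # prints 128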
Doorbell Buffer Config: Supported 00:08:53.242 Get LBA Status Capability: Not Supported 00:08:53.242 Command & Feature Lockdown Capability: Not Supported 00:08:53.242 Abort Command Limit: 4 00:08:53.242 Async Event Request Limit: 4 00:08:53.242 Number of Firmware Slots: N/A 00:08:53.242 Firmware Slot 1 Read-Only: N/A 00:08:53.242 Firmware Activation Without Reset: N/A 00:08:53.242 Multiple Update Detection Support: N/A 00:08:53.242 Firmware Update Granularity: No Information Provided 00:08:53.242 Per-Namespace SMART Log: Yes 00:08:53.242 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.242 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:53.242 Command Effects Log Page: Supported 00:08:53.242 Get Log Page Extended Data: Supported 00:08:53.242 Telemetry Log Pages: Not Supported 00:08:53.242 Persistent Event Log Pages: Not Supported 00:08:53.242 Supported Log Pages Log Page: May Support 00:08:53.242 Commands Supported & Effects Log Page: Not Supported 00:08:53.242 Feature Identifiers & Effects Log Page:May Support 00:08:53.242 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.242 Data Area 4 for Telemetry Log: Not Supported 00:08:53.242 Error Log Page Entries Supported: 1 00:08:53.242 Keep Alive: Not Supported 00:08:53.242 00:08:53.242 NVM Command Set Attributes 00:08:53.242 ========================== 00:08:53.242 Submission Queue Entry Size 00:08:53.242 Max: 64 00:08:53.242 Min: 64 00:08:53.242 Completion Queue Entry Size 00:08:53.242 Max: 16 00:08:53.242 Min: 16 00:08:53.242 Number of Namespaces: 256 00:08:53.242 Compare Command: Supported 00:08:53.242 Write Uncorrectable Command: Not Supported 00:08:53.242 Dataset Management Command: Supported 00:08:53.242 Write Zeroes Command: Supported 00:08:53.242 Set Features Save Field: Supported 00:08:53.242 Reservations: Not Supported 00:08:53.242 Timestamp: Supported 00:08:53.242 Copy: Supported 00:08:53.242 Volatile Write Cache: Present 00:08:53.242 Atomic Write Unit (Normal): 1 00:08:53.242 Atomic Write Unit (PFail): 1 00:08:53.242 Atomic Compare & Write Unit: 1 00:08:53.242 Fused Compare & Write: Not Supported 00:08:53.242 Scatter-Gather List 00:08:53.242 SGL Command Set: Supported 00:08:53.242 SGL Keyed: Not Supported 00:08:53.242 SGL Bit Bucket Descriptor: Not Supported 00:08:53.242 SGL Metadata Pointer: Not Supported 00:08:53.242 Oversized SGL: Not Supported 00:08:53.242 SGL Metadata Address: Not Supported 00:08:53.242 SGL Offset: Not Supported 00:08:53.242 Transport SGL Data Block: Not Supported 00:08:53.242 Replay Protected Memory Block: Not Supported 00:08:53.242 00:08:53.242 Firmware Slot Information 00:08:53.243 ========================= 00:08:53.243 Active slot: 1 00:08:53.243 Slot 1 Firmware Revision: 1.0 00:08:53.243 00:08:53.243 00:08:53.243 Commands Supported and Effects 00:08:53.243 ============================== 00:08:53.243 Admin Commands 00:08:53.243 -------------- 00:08:53.243 Delete I/O Submission Queue (00h): Supported 00:08:53.243 Create I/O Submission Queue (01h): Supported 00:08:53.243 Get Log Page (02h): Supported 00:08:53.243 Delete I/O Completion Queue (04h): Supported 00:08:53.243 Create I/O Completion Queue (05h): Supported 00:08:53.243 Identify (06h): Supported 00:08:53.243 Abort (08h): Supported 00:08:53.243 Set Features (09h): Supported 00:08:53.243 Get Features (0Ah): Supported 00:08:53.243 Asynchronous Event Request (0Ch): Supported 00:08:53.243 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.243 Directive Send (19h): Supported 00:08:53.243 Directive Receive (1Ah): Supported 
00:08:53.243 Virtualization Management (1Ch): Supported 00:08:53.243 Doorbell Buffer Config (7Ch): Supported 00:08:53.243 Format NVM (80h): Supported LBA-Change 00:08:53.243 I/O Commands 00:08:53.243 ------------ 00:08:53.243 Flush (00h): Supported LBA-Change 00:08:53.243 Write (01h): Supported LBA-Change 00:08:53.243 Read (02h): Supported 00:08:53.243 Compare (05h): Supported 00:08:53.243 Write Zeroes (08h): Supported LBA-Change 00:08:53.243 Dataset Management (09h): Supported LBA-Change 00:08:53.243 Unknown (0Ch): Supported 00:08:53.243 Unknown (12h): Supported 00:08:53.243 Copy (19h): Supported LBA-Change 00:08:53.243 Unknown (1Dh): Supported LBA-Change 00:08:53.243 00:08:53.243 Error Log 00:08:53.243 ========= 00:08:53.243 00:08:53.243 Arbitration 00:08:53.243 =========== 00:08:53.243 Arbitration Burst: no limit 00:08:53.243 00:08:53.243 Power Management 00:08:53.243 ================ 00:08:53.243 Number of Power States: 1 00:08:53.243 Current Power State: Power State #0 00:08:53.243 Power State #0: 00:08:53.243 Max Power: 25.00 W 00:08:53.243 Non-Operational State: Operational 00:08:53.243 Entry Latency: 16 microseconds 00:08:53.243 Exit Latency: 4 microseconds 00:08:53.243 Relative Read Throughput: 0 00:08:53.243 Relative Read Latency: 0 00:08:53.243 Relative Write Throughput: 0 00:08:53.243 Relative Write Latency: 0 00:08:53.243 Idle Power: Not Reported 00:08:53.243 Active Power: Not Reported 00:08:53.243 Non-Operational Permissive Mode: Not Supported 00:08:53.243 00:08:53.243 Health Information 00:08:53.243 ================== 00:08:53.243 Critical Warnings: 00:08:53.243 Available Spare Space: OK 00:08:53.243 Temperature: OK 00:08:53.243 Device Reliability: OK 00:08:53.243 Read Only: No 00:08:53.243 Volatile Memory Backup: OK 00:08:53.243 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.243 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.243 Available Spare: 0% 00:08:53.243 Available Spare Threshold: 0% 00:08:53.243 Life Percentage Used: 0% 00:08:53.243 Data Units Read: 706 00:08:53.243 Data Units Written: 597 00:08:53.243 Host Read Commands: 32876 00:08:53.243 Host Write Commands: 31914 00:08:53.243 Controller Busy Time: 0 minutes 00:08:53.243 Power Cycles: 0 00:08:53.243 Power On Hours: 0 hours 00:08:53.243 Unsafe Shutdowns: 0 00:08:53.243 Unrecoverable Media Errors: 0 00:08:53.243 Lifetime Error Log Entries: 0 00:08:53.243 Warning Temperature Time: 0 minutes 00:08:53.243 Critical Temperature Time: 0 minutes 00:08:53.243 00:08:53.243 Number of Queues 00:08:53.243 ================ 00:08:53.243 Number of I/O Submission Queues: 64 00:08:53.243 Number of I/O Completion Queues: 64 00:08:53.243 00:08:53.243 ZNS Specific Controller Data 00:08:53.243 ============================ 00:08:53.243 Zone Append Size Limit: 0 00:08:53.243 00:08:53.243 00:08:53.243 Active Namespaces 00:08:53.243 ================= 00:08:53.243 Namespace ID:1 00:08:53.243 Error Recovery Timeout: Unlimited 00:08:53.243 Command Set Identifier: NVM (00h) 00:08:53.243 Deallocate: Supported 00:08:53.243 Deallocated/Unwritten Error: Supported 00:08:53.243 Deallocated Read Value: All 0x00 00:08:53.243 Deallocate in Write Zeroes: Not Supported 00:08:53.243 Deallocated Guard Field: 0xFFFF 00:08:53.243 Flush: Supported 00:08:53.243 Reservation: Not Supported 00:08:53.243 Metadata Transferred as: Separate Metadata Buffer 00:08:53.243 Namespace Sharing Capabilities: Private 00:08:53.243 Size (in LBAs): 1548666 (5GiB) 00:08:53.243 Capacity (in LBAs): 1548666 (5GiB) 00:08:53.243 Utilization (in LBAs): 1548666 (5GiB) 
00:08:53.243 Thin Provisioning: Not Supported 00:08:53.243 Per-NS Atomic Units: No 00:08:53.243 Maximum Single Source Range Length: 128 00:08:53.243 Maximum Copy Length: 128 00:08:53.243 Maximum Source Range Count: 128 00:08:53.243 NGUID/EUI64 Never Reused: No 00:08:53.243 Namespace Write Protected: No 00:08:53.243 Number of LBA Formats: 8 00:08:53.243 Current LBA Format: LBA Format #07 00:08:53.243 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.243 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.243 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.243 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.243 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.243 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.243 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.243 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.243 00:08:53.243 NVM Specific Namespace Data 00:08:53.243 =========================== 00:08:53.243 Logical Block Storage Tag Mask: 0 00:08:53.243 Protection Information Capabilities: 00:08:53.243 16b Guard Protection Information Storage Tag Support: No 00:08:53.243 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.243 Storage Tag Check Read Support: No 00:08:53.243 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.243 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.243 14:14:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:53.503 ===================================================== 00:08:53.503 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.503 ===================================================== 00:08:53.503 Controller Capabilities/Features 00:08:53.503 ================================ 00:08:53.503 Vendor ID: 1b36 00:08:53.503 Subsystem Vendor ID: 1af4 00:08:53.503 Serial Number: 12341 00:08:53.503 Model Number: QEMU NVMe Ctrl 00:08:53.503 Firmware Version: 8.0.0 00:08:53.503 Recommended Arb Burst: 6 00:08:53.503 IEEE OUI Identifier: 00 54 52 00:08:53.503 Multi-path I/O 00:08:53.503 May have multiple subsystem ports: No 00:08:53.503 May have multiple controllers: No 00:08:53.503 Associated with SR-IOV VF: No 00:08:53.503 Max Data Transfer Size: 524288 00:08:53.503 Max Number of Namespaces: 256 00:08:53.503 Max Number of I/O Queues: 64 00:08:53.503 NVMe Specification Version (VS): 1.4 00:08:53.503 NVMe Specification Version (Identify): 1.4 00:08:53.503 Maximum Queue Entries: 2048 00:08:53.503 Contiguous Queues Required: Yes 00:08:53.503 Arbitration Mechanisms Supported 00:08:53.503 Weighted Round Robin: Not Supported 00:08:53.503 Vendor Specific: Not Supported 
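The `for bdf in "${bdfs[@]}"` and spdk_nvme_identify lines echoed above come from nvme/nvme.sh (the @15/@16 markers in the xtrace prefix). Written out, the loop looks roughly like the sketch below; how the test actually populates the bdfs array is not visible in this log, so the lspci-based discovery is an assumption:

  # Hypothetical BDF discovery: list PCIe functions with NVMe class code 0x0108.
  # Assumes the controllers are already bound to a userspace driver
  # (SPDK's scripts/setup.sh normally takes care of that).
  bdfs=($(lspci -Dmm -d ::0108 | awk '{print $1}'))
  for bdf in "${bdfs[@]}"; do
      # Same invocation style as recorded in this log, one controller at a time.
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done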
00:08:53.503 Reset Timeout: 7500 ms 00:08:53.503 Doorbell Stride: 4 bytes 00:08:53.503 NVM Subsystem Reset: Not Supported 00:08:53.503 Command Sets Supported 00:08:53.503 NVM Command Set: Supported 00:08:53.503 Boot Partition: Not Supported 00:08:53.503 Memory Page Size Minimum: 4096 bytes 00:08:53.503 Memory Page Size Maximum: 65536 bytes 00:08:53.503 Persistent Memory Region: Not Supported 00:08:53.503 Optional Asynchronous Events Supported 00:08:53.503 Namespace Attribute Notices: Supported 00:08:53.503 Firmware Activation Notices: Not Supported 00:08:53.503 ANA Change Notices: Not Supported 00:08:53.503 PLE Aggregate Log Change Notices: Not Supported 00:08:53.503 LBA Status Info Alert Notices: Not Supported 00:08:53.503 EGE Aggregate Log Change Notices: Not Supported 00:08:53.503 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.503 Zone Descriptor Change Notices: Not Supported 00:08:53.503 Discovery Log Change Notices: Not Supported 00:08:53.503 Controller Attributes 00:08:53.503 128-bit Host Identifier: Not Supported 00:08:53.503 Non-Operational Permissive Mode: Not Supported 00:08:53.503 NVM Sets: Not Supported 00:08:53.503 Read Recovery Levels: Not Supported 00:08:53.503 Endurance Groups: Not Supported 00:08:53.503 Predictable Latency Mode: Not Supported 00:08:53.503 Traffic Based Keep ALive: Not Supported 00:08:53.503 Namespace Granularity: Not Supported 00:08:53.503 SQ Associations: Not Supported 00:08:53.503 UUID List: Not Supported 00:08:53.503 Multi-Domain Subsystem: Not Supported 00:08:53.503 Fixed Capacity Management: Not Supported 00:08:53.503 Variable Capacity Management: Not Supported 00:08:53.503 Delete Endurance Group: Not Supported 00:08:53.503 Delete NVM Set: Not Supported 00:08:53.503 Extended LBA Formats Supported: Supported 00:08:53.503 Flexible Data Placement Supported: Not Supported 00:08:53.503 00:08:53.503 Controller Memory Buffer Support 00:08:53.503 ================================ 00:08:53.503 Supported: No 00:08:53.503 00:08:53.503 Persistent Memory Region Support 00:08:53.503 ================================ 00:08:53.503 Supported: No 00:08:53.503 00:08:53.503 Admin Command Set Attributes 00:08:53.503 ============================ 00:08:53.503 Security Send/Receive: Not Supported 00:08:53.503 Format NVM: Supported 00:08:53.503 Firmware Activate/Download: Not Supported 00:08:53.503 Namespace Management: Supported 00:08:53.503 Device Self-Test: Not Supported 00:08:53.503 Directives: Supported 00:08:53.503 NVMe-MI: Not Supported 00:08:53.503 Virtualization Management: Not Supported 00:08:53.503 Doorbell Buffer Config: Supported 00:08:53.503 Get LBA Status Capability: Not Supported 00:08:53.503 Command & Feature Lockdown Capability: Not Supported 00:08:53.504 Abort Command Limit: 4 00:08:53.504 Async Event Request Limit: 4 00:08:53.504 Number of Firmware Slots: N/A 00:08:53.504 Firmware Slot 1 Read-Only: N/A 00:08:53.504 Firmware Activation Without Reset: N/A 00:08:53.504 Multiple Update Detection Support: N/A 00:08:53.504 Firmware Update Granularity: No Information Provided 00:08:53.504 Per-Namespace SMART Log: Yes 00:08:53.504 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.504 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:53.504 Command Effects Log Page: Supported 00:08:53.504 Get Log Page Extended Data: Supported 00:08:53.504 Telemetry Log Pages: Not Supported 00:08:53.504 Persistent Event Log Pages: Not Supported 00:08:53.504 Supported Log Pages Log Page: May Support 00:08:53.504 Commands Supported & Effects Log Page: Not Supported 
00:08:53.504 Feature Identifiers & Effects Log Page:May Support 00:08:53.504 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.504 Data Area 4 for Telemetry Log: Not Supported 00:08:53.504 Error Log Page Entries Supported: 1 00:08:53.504 Keep Alive: Not Supported 00:08:53.504 00:08:53.504 NVM Command Set Attributes 00:08:53.504 ========================== 00:08:53.504 Submission Queue Entry Size 00:08:53.504 Max: 64 00:08:53.504 Min: 64 00:08:53.504 Completion Queue Entry Size 00:08:53.504 Max: 16 00:08:53.504 Min: 16 00:08:53.504 Number of Namespaces: 256 00:08:53.504 Compare Command: Supported 00:08:53.504 Write Uncorrectable Command: Not Supported 00:08:53.504 Dataset Management Command: Supported 00:08:53.504 Write Zeroes Command: Supported 00:08:53.504 Set Features Save Field: Supported 00:08:53.504 Reservations: Not Supported 00:08:53.504 Timestamp: Supported 00:08:53.504 Copy: Supported 00:08:53.504 Volatile Write Cache: Present 00:08:53.504 Atomic Write Unit (Normal): 1 00:08:53.504 Atomic Write Unit (PFail): 1 00:08:53.504 Atomic Compare & Write Unit: 1 00:08:53.504 Fused Compare & Write: Not Supported 00:08:53.504 Scatter-Gather List 00:08:53.504 SGL Command Set: Supported 00:08:53.504 SGL Keyed: Not Supported 00:08:53.504 SGL Bit Bucket Descriptor: Not Supported 00:08:53.504 SGL Metadata Pointer: Not Supported 00:08:53.504 Oversized SGL: Not Supported 00:08:53.504 SGL Metadata Address: Not Supported 00:08:53.504 SGL Offset: Not Supported 00:08:53.504 Transport SGL Data Block: Not Supported 00:08:53.504 Replay Protected Memory Block: Not Supported 00:08:53.504 00:08:53.504 Firmware Slot Information 00:08:53.504 ========================= 00:08:53.504 Active slot: 1 00:08:53.504 Slot 1 Firmware Revision: 1.0 00:08:53.504 00:08:53.504 00:08:53.504 Commands Supported and Effects 00:08:53.504 ============================== 00:08:53.504 Admin Commands 00:08:53.504 -------------- 00:08:53.504 Delete I/O Submission Queue (00h): Supported 00:08:53.504 Create I/O Submission Queue (01h): Supported 00:08:53.504 Get Log Page (02h): Supported 00:08:53.504 Delete I/O Completion Queue (04h): Supported 00:08:53.504 Create I/O Completion Queue (05h): Supported 00:08:53.504 Identify (06h): Supported 00:08:53.504 Abort (08h): Supported 00:08:53.504 Set Features (09h): Supported 00:08:53.504 Get Features (0Ah): Supported 00:08:53.504 Asynchronous Event Request (0Ch): Supported 00:08:53.504 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.504 Directive Send (19h): Supported 00:08:53.504 Directive Receive (1Ah): Supported 00:08:53.504 Virtualization Management (1Ch): Supported 00:08:53.504 Doorbell Buffer Config (7Ch): Supported 00:08:53.504 Format NVM (80h): Supported LBA-Change 00:08:53.504 I/O Commands 00:08:53.504 ------------ 00:08:53.504 Flush (00h): Supported LBA-Change 00:08:53.504 Write (01h): Supported LBA-Change 00:08:53.504 Read (02h): Supported 00:08:53.504 Compare (05h): Supported 00:08:53.504 Write Zeroes (08h): Supported LBA-Change 00:08:53.504 Dataset Management (09h): Supported LBA-Change 00:08:53.504 Unknown (0Ch): Supported 00:08:53.504 Unknown (12h): Supported 00:08:53.504 Copy (19h): Supported LBA-Change 00:08:53.504 Unknown (1Dh): Supported LBA-Change 00:08:53.504 00:08:53.504 Error Log 00:08:53.504 ========= 00:08:53.504 00:08:53.504 Arbitration 00:08:53.504 =========== 00:08:53.504 Arbitration Burst: no limit 00:08:53.504 00:08:53.504 Power Management 00:08:53.504 ================ 00:08:53.504 Number of Power States: 1 00:08:53.504 Current Power State: 
Power State #0 00:08:53.504 Power State #0: 00:08:53.504 Max Power: 25.00 W 00:08:53.504 Non-Operational State: Operational 00:08:53.504 Entry Latency: 16 microseconds 00:08:53.504 Exit Latency: 4 microseconds 00:08:53.504 Relative Read Throughput: 0 00:08:53.504 Relative Read Latency: 0 00:08:53.504 Relative Write Throughput: 0 00:08:53.504 Relative Write Latency: 0 00:08:53.504 Idle Power: Not Reported 00:08:53.504 Active Power: Not Reported 00:08:53.504 Non-Operational Permissive Mode: Not Supported 00:08:53.504 00:08:53.504 Health Information 00:08:53.504 ================== 00:08:53.504 Critical Warnings: 00:08:53.504 Available Spare Space: OK 00:08:53.504 Temperature: OK 00:08:53.504 Device Reliability: OK 00:08:53.504 Read Only: No 00:08:53.504 Volatile Memory Backup: OK 00:08:53.504 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.504 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.504 Available Spare: 0% 00:08:53.504 Available Spare Threshold: 0% 00:08:53.504 Life Percentage Used: 0% 00:08:53.504 Data Units Read: 1102 00:08:53.504 Data Units Written: 891 00:08:53.504 Host Read Commands: 49390 00:08:53.504 Host Write Commands: 46504 00:08:53.504 Controller Busy Time: 0 minutes 00:08:53.504 Power Cycles: 0 00:08:53.504 Power On Hours: 0 hours 00:08:53.504 Unsafe Shutdowns: 0 00:08:53.504 Unrecoverable Media Errors: 0 00:08:53.504 Lifetime Error Log Entries: 0 00:08:53.504 Warning Temperature Time: 0 minutes 00:08:53.504 Critical Temperature Time: 0 minutes 00:08:53.504 00:08:53.504 Number of Queues 00:08:53.504 ================ 00:08:53.504 Number of I/O Submission Queues: 64 00:08:53.504 Number of I/O Completion Queues: 64 00:08:53.504 00:08:53.504 ZNS Specific Controller Data 00:08:53.504 ============================ 00:08:53.504 Zone Append Size Limit: 0 00:08:53.504 00:08:53.504 00:08:53.504 Active Namespaces 00:08:53.504 ================= 00:08:53.504 Namespace ID:1 00:08:53.504 Error Recovery Timeout: Unlimited 00:08:53.504 Command Set Identifier: NVM (00h) 00:08:53.504 Deallocate: Supported 00:08:53.504 Deallocated/Unwritten Error: Supported 00:08:53.504 Deallocated Read Value: All 0x00 00:08:53.504 Deallocate in Write Zeroes: Not Supported 00:08:53.504 Deallocated Guard Field: 0xFFFF 00:08:53.504 Flush: Supported 00:08:53.504 Reservation: Not Supported 00:08:53.504 Namespace Sharing Capabilities: Private 00:08:53.504 Size (in LBAs): 1310720 (5GiB) 00:08:53.504 Capacity (in LBAs): 1310720 (5GiB) 00:08:53.504 Utilization (in LBAs): 1310720 (5GiB) 00:08:53.504 Thin Provisioning: Not Supported 00:08:53.504 Per-NS Atomic Units: No 00:08:53.504 Maximum Single Source Range Length: 128 00:08:53.504 Maximum Copy Length: 128 00:08:53.504 Maximum Source Range Count: 128 00:08:53.504 NGUID/EUI64 Never Reused: No 00:08:53.504 Namespace Write Protected: No 00:08:53.504 Number of LBA Formats: 8 00:08:53.504 Current LBA Format: LBA Format #04 00:08:53.504 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.504 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.504 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.504 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.504 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.504 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.504 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.504 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.504 00:08:53.504 NVM Specific Namespace Data 00:08:53.504 =========================== 00:08:53.504 Logical Block Storage Tag Mask: 0 
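The namespace sizes reported in GiB follow from the LBA count multiplied by the data size of the current LBA format (#04, 4096 bytes, no interleaved metadata). A quick arithmetic check using figures from the dumps above and earlier in this log:

  # 0000:00:11.0 namespace 1: 1310720 LBAs * 4096 B = 5 GiB exactly.
  echo $(( 1310720 * 4096 >> 30 ))   # 5
  # The 4GiB namespaces on the 12342 controller: 1048576 LBAs * 4096 B = 4 GiB.
  echo $(( 1048576 * 4096 >> 30 ))   # 4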
00:08:53.504 Protection Information Capabilities: 00:08:53.504 16b Guard Protection Information Storage Tag Support: No 00:08:53.504 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.504 Storage Tag Check Read Support: No 00:08:53.504 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.504 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.504 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.504 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.504 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.505 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.505 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.505 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.505 14:14:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.505 14:14:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:53.764 ===================================================== 00:08:53.764 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.764 ===================================================== 00:08:53.764 Controller Capabilities/Features 00:08:53.764 ================================ 00:08:53.764 Vendor ID: 1b36 00:08:53.764 Subsystem Vendor ID: 1af4 00:08:53.764 Serial Number: 12342 00:08:53.764 Model Number: QEMU NVMe Ctrl 00:08:53.764 Firmware Version: 8.0.0 00:08:53.764 Recommended Arb Burst: 6 00:08:53.764 IEEE OUI Identifier: 00 54 52 00:08:53.764 Multi-path I/O 00:08:53.764 May have multiple subsystem ports: No 00:08:53.764 May have multiple controllers: No 00:08:53.764 Associated with SR-IOV VF: No 00:08:53.764 Max Data Transfer Size: 524288 00:08:53.764 Max Number of Namespaces: 256 00:08:53.764 Max Number of I/O Queues: 64 00:08:53.764 NVMe Specification Version (VS): 1.4 00:08:53.764 NVMe Specification Version (Identify): 1.4 00:08:53.764 Maximum Queue Entries: 2048 00:08:53.764 Contiguous Queues Required: Yes 00:08:53.764 Arbitration Mechanisms Supported 00:08:53.764 Weighted Round Robin: Not Supported 00:08:53.764 Vendor Specific: Not Supported 00:08:53.764 Reset Timeout: 7500 ms 00:08:53.765 Doorbell Stride: 4 bytes 00:08:53.765 NVM Subsystem Reset: Not Supported 00:08:53.765 Command Sets Supported 00:08:53.765 NVM Command Set: Supported 00:08:53.765 Boot Partition: Not Supported 00:08:53.765 Memory Page Size Minimum: 4096 bytes 00:08:53.765 Memory Page Size Maximum: 65536 bytes 00:08:53.765 Persistent Memory Region: Not Supported 00:08:53.765 Optional Asynchronous Events Supported 00:08:53.765 Namespace Attribute Notices: Supported 00:08:53.765 Firmware Activation Notices: Not Supported 00:08:53.765 ANA Change Notices: Not Supported 00:08:53.765 PLE Aggregate Log Change Notices: Not Supported 00:08:53.765 LBA Status Info Alert Notices: Not Supported 00:08:53.765 EGE Aggregate Log Change Notices: Not Supported 00:08:53.765 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.765 Zone Descriptor Change Notices: Not Supported 00:08:53.765 Discovery Log Change Notices: Not Supported 00:08:53.765 Controller Attributes 00:08:53.765 128-bit Host Identifier: 
Not Supported 00:08:53.765 Non-Operational Permissive Mode: Not Supported 00:08:53.765 NVM Sets: Not Supported 00:08:53.765 Read Recovery Levels: Not Supported 00:08:53.765 Endurance Groups: Not Supported 00:08:53.765 Predictable Latency Mode: Not Supported 00:08:53.765 Traffic Based Keep ALive: Not Supported 00:08:53.765 Namespace Granularity: Not Supported 00:08:53.765 SQ Associations: Not Supported 00:08:53.765 UUID List: Not Supported 00:08:53.765 Multi-Domain Subsystem: Not Supported 00:08:53.765 Fixed Capacity Management: Not Supported 00:08:53.765 Variable Capacity Management: Not Supported 00:08:53.765 Delete Endurance Group: Not Supported 00:08:53.765 Delete NVM Set: Not Supported 00:08:53.765 Extended LBA Formats Supported: Supported 00:08:53.765 Flexible Data Placement Supported: Not Supported 00:08:53.765 00:08:53.765 Controller Memory Buffer Support 00:08:53.765 ================================ 00:08:53.765 Supported: No 00:08:53.765 00:08:53.765 Persistent Memory Region Support 00:08:53.765 ================================ 00:08:53.765 Supported: No 00:08:53.765 00:08:53.765 Admin Command Set Attributes 00:08:53.765 ============================ 00:08:53.765 Security Send/Receive: Not Supported 00:08:53.765 Format NVM: Supported 00:08:53.765 Firmware Activate/Download: Not Supported 00:08:53.765 Namespace Management: Supported 00:08:53.765 Device Self-Test: Not Supported 00:08:53.765 Directives: Supported 00:08:53.765 NVMe-MI: Not Supported 00:08:53.765 Virtualization Management: Not Supported 00:08:53.765 Doorbell Buffer Config: Supported 00:08:53.765 Get LBA Status Capability: Not Supported 00:08:53.765 Command & Feature Lockdown Capability: Not Supported 00:08:53.765 Abort Command Limit: 4 00:08:53.765 Async Event Request Limit: 4 00:08:53.765 Number of Firmware Slots: N/A 00:08:53.765 Firmware Slot 1 Read-Only: N/A 00:08:53.765 Firmware Activation Without Reset: N/A 00:08:53.765 Multiple Update Detection Support: N/A 00:08:53.765 Firmware Update Granularity: No Information Provided 00:08:53.765 Per-Namespace SMART Log: Yes 00:08:53.765 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.765 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:53.765 Command Effects Log Page: Supported 00:08:53.765 Get Log Page Extended Data: Supported 00:08:53.765 Telemetry Log Pages: Not Supported 00:08:53.765 Persistent Event Log Pages: Not Supported 00:08:53.765 Supported Log Pages Log Page: May Support 00:08:53.765 Commands Supported & Effects Log Page: Not Supported 00:08:53.765 Feature Identifiers & Effects Log Page:May Support 00:08:53.765 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.765 Data Area 4 for Telemetry Log: Not Supported 00:08:53.765 Error Log Page Entries Supported: 1 00:08:53.765 Keep Alive: Not Supported 00:08:53.765 00:08:53.765 NVM Command Set Attributes 00:08:53.765 ========================== 00:08:53.765 Submission Queue Entry Size 00:08:53.765 Max: 64 00:08:53.765 Min: 64 00:08:53.765 Completion Queue Entry Size 00:08:53.765 Max: 16 00:08:53.765 Min: 16 00:08:53.765 Number of Namespaces: 256 00:08:53.765 Compare Command: Supported 00:08:53.765 Write Uncorrectable Command: Not Supported 00:08:53.765 Dataset Management Command: Supported 00:08:53.765 Write Zeroes Command: Supported 00:08:53.765 Set Features Save Field: Supported 00:08:53.765 Reservations: Not Supported 00:08:53.765 Timestamp: Supported 00:08:53.765 Copy: Supported 00:08:53.765 Volatile Write Cache: Present 00:08:53.765 Atomic Write Unit (Normal): 1 00:08:53.765 Atomic Write Unit 
(PFail): 1 00:08:53.765 Atomic Compare & Write Unit: 1 00:08:53.765 Fused Compare & Write: Not Supported 00:08:53.765 Scatter-Gather List 00:08:53.765 SGL Command Set: Supported 00:08:53.765 SGL Keyed: Not Supported 00:08:53.765 SGL Bit Bucket Descriptor: Not Supported 00:08:53.765 SGL Metadata Pointer: Not Supported 00:08:53.765 Oversized SGL: Not Supported 00:08:53.765 SGL Metadata Address: Not Supported 00:08:53.765 SGL Offset: Not Supported 00:08:53.765 Transport SGL Data Block: Not Supported 00:08:53.765 Replay Protected Memory Block: Not Supported 00:08:53.765 00:08:53.765 Firmware Slot Information 00:08:53.765 ========================= 00:08:53.765 Active slot: 1 00:08:53.765 Slot 1 Firmware Revision: 1.0 00:08:53.765 00:08:53.765 00:08:53.765 Commands Supported and Effects 00:08:53.765 ============================== 00:08:53.765 Admin Commands 00:08:53.765 -------------- 00:08:53.765 Delete I/O Submission Queue (00h): Supported 00:08:53.765 Create I/O Submission Queue (01h): Supported 00:08:53.765 Get Log Page (02h): Supported 00:08:53.765 Delete I/O Completion Queue (04h): Supported 00:08:53.765 Create I/O Completion Queue (05h): Supported 00:08:53.765 Identify (06h): Supported 00:08:53.765 Abort (08h): Supported 00:08:53.765 Set Features (09h): Supported 00:08:53.765 Get Features (0Ah): Supported 00:08:53.765 Asynchronous Event Request (0Ch): Supported 00:08:53.765 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.765 Directive Send (19h): Supported 00:08:53.765 Directive Receive (1Ah): Supported 00:08:53.765 Virtualization Management (1Ch): Supported 00:08:53.765 Doorbell Buffer Config (7Ch): Supported 00:08:53.765 Format NVM (80h): Supported LBA-Change 00:08:53.765 I/O Commands 00:08:53.765 ------------ 00:08:53.765 Flush (00h): Supported LBA-Change 00:08:53.765 Write (01h): Supported LBA-Change 00:08:53.765 Read (02h): Supported 00:08:53.765 Compare (05h): Supported 00:08:53.765 Write Zeroes (08h): Supported LBA-Change 00:08:53.765 Dataset Management (09h): Supported LBA-Change 00:08:53.765 Unknown (0Ch): Supported 00:08:53.765 Unknown (12h): Supported 00:08:53.765 Copy (19h): Supported LBA-Change 00:08:53.765 Unknown (1Dh): Supported LBA-Change 00:08:53.765 00:08:53.765 Error Log 00:08:53.765 ========= 00:08:53.765 00:08:53.765 Arbitration 00:08:53.765 =========== 00:08:53.765 Arbitration Burst: no limit 00:08:53.765 00:08:53.765 Power Management 00:08:53.765 ================ 00:08:53.765 Number of Power States: 1 00:08:53.765 Current Power State: Power State #0 00:08:53.765 Power State #0: 00:08:53.765 Max Power: 25.00 W 00:08:53.765 Non-Operational State: Operational 00:08:53.765 Entry Latency: 16 microseconds 00:08:53.765 Exit Latency: 4 microseconds 00:08:53.765 Relative Read Throughput: 0 00:08:53.765 Relative Read Latency: 0 00:08:53.765 Relative Write Throughput: 0 00:08:53.765 Relative Write Latency: 0 00:08:53.765 Idle Power: Not Reported 00:08:53.765 Active Power: Not Reported 00:08:53.765 Non-Operational Permissive Mode: Not Supported 00:08:53.765 00:08:53.765 Health Information 00:08:53.765 ================== 00:08:53.765 Critical Warnings: 00:08:53.765 Available Spare Space: OK 00:08:53.765 Temperature: OK 00:08:53.765 Device Reliability: OK 00:08:53.765 Read Only: No 00:08:53.765 Volatile Memory Backup: OK 00:08:53.765 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.765 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.765 Available Spare: 0% 00:08:53.765 Available Spare Threshold: 0% 00:08:53.765 Life Percentage Used: 0% 
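The health log prints temperatures in both units; the Celsius value in parentheses is simply the Kelvin reading minus 273. A one-line check against the values above (323 K current, 343 K threshold):

  for k in 323 343; do echo "$k K = $(( k - 273 )) C"; done   # 50 C and 70 C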
00:08:53.765 Data Units Read: 2202 00:08:53.765 Data Units Written: 1882 00:08:53.765 Host Read Commands: 100047 00:08:53.765 Host Write Commands: 95817 00:08:53.765 Controller Busy Time: 0 minutes 00:08:53.765 Power Cycles: 0 00:08:53.765 Power On Hours: 0 hours 00:08:53.765 Unsafe Shutdowns: 0 00:08:53.765 Unrecoverable Media Errors: 0 00:08:53.765 Lifetime Error Log Entries: 0 00:08:53.765 Warning Temperature Time: 0 minutes 00:08:53.765 Critical Temperature Time: 0 minutes 00:08:53.766 00:08:53.766 Number of Queues 00:08:53.766 ================ 00:08:53.766 Number of I/O Submission Queues: 64 00:08:53.766 Number of I/O Completion Queues: 64 00:08:53.766 00:08:53.766 ZNS Specific Controller Data 00:08:53.766 ============================ 00:08:53.766 Zone Append Size Limit: 0 00:08:53.766 00:08:53.766 00:08:53.766 Active Namespaces 00:08:53.766 ================= 00:08:53.766 Namespace ID:1 00:08:53.766 Error Recovery Timeout: Unlimited 00:08:53.766 Command Set Identifier: NVM (00h) 00:08:53.766 Deallocate: Supported 00:08:53.766 Deallocated/Unwritten Error: Supported 00:08:53.766 Deallocated Read Value: All 0x00 00:08:53.766 Deallocate in Write Zeroes: Not Supported 00:08:53.766 Deallocated Guard Field: 0xFFFF 00:08:53.766 Flush: Supported 00:08:53.766 Reservation: Not Supported 00:08:53.766 Namespace Sharing Capabilities: Private 00:08:53.766 Size (in LBAs): 1048576 (4GiB) 00:08:53.766 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.766 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.766 Thin Provisioning: Not Supported 00:08:53.766 Per-NS Atomic Units: No 00:08:53.766 Maximum Single Source Range Length: 128 00:08:53.766 Maximum Copy Length: 128 00:08:53.766 Maximum Source Range Count: 128 00:08:53.766 NGUID/EUI64 Never Reused: No 00:08:53.766 Namespace Write Protected: No 00:08:53.766 Number of LBA Formats: 8 00:08:53.766 Current LBA Format: LBA Format #04 00:08:53.766 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.766 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.766 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.766 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.766 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.766 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.766 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.766 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.766 00:08:53.766 NVM Specific Namespace Data 00:08:53.766 =========================== 00:08:53.766 Logical Block Storage Tag Mask: 0 00:08:53.766 Protection Information Capabilities: 00:08:53.766 16b Guard Protection Information Storage Tag Support: No 00:08:53.766 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.766 Storage Tag Check Read Support: No 00:08:53.766 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Namespace ID:2 00:08:53.766 Error Recovery Timeout: Unlimited 00:08:53.766 Command Set Identifier: NVM (00h) 00:08:53.766 Deallocate: Supported 00:08:53.766 Deallocated/Unwritten Error: Supported 00:08:53.766 Deallocated Read Value: All 0x00 00:08:53.766 Deallocate in Write Zeroes: Not Supported 00:08:53.766 Deallocated Guard Field: 0xFFFF 00:08:53.766 Flush: Supported 00:08:53.766 Reservation: Not Supported 00:08:53.766 Namespace Sharing Capabilities: Private 00:08:53.766 Size (in LBAs): 1048576 (4GiB) 00:08:53.766 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.766 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.766 Thin Provisioning: Not Supported 00:08:53.766 Per-NS Atomic Units: No 00:08:53.766 Maximum Single Source Range Length: 128 00:08:53.766 Maximum Copy Length: 128 00:08:53.766 Maximum Source Range Count: 128 00:08:53.766 NGUID/EUI64 Never Reused: No 00:08:53.766 Namespace Write Protected: No 00:08:53.766 Number of LBA Formats: 8 00:08:53.766 Current LBA Format: LBA Format #04 00:08:53.766 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.766 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.766 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.766 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.766 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.766 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.766 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.766 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.766 00:08:53.766 NVM Specific Namespace Data 00:08:53.766 =========================== 00:08:53.766 Logical Block Storage Tag Mask: 0 00:08:53.766 Protection Information Capabilities: 00:08:53.766 16b Guard Protection Information Storage Tag Support: No 00:08:53.766 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.766 Storage Tag Check Read Support: No 00:08:53.766 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Namespace ID:3 00:08:53.766 Error Recovery Timeout: Unlimited 00:08:53.766 Command Set Identifier: NVM (00h) 00:08:53.766 Deallocate: Supported 00:08:53.766 Deallocated/Unwritten Error: Supported 00:08:53.766 Deallocated Read Value: All 0x00 00:08:53.766 Deallocate in Write Zeroes: Not Supported 00:08:53.766 Deallocated Guard Field: 0xFFFF 00:08:53.766 Flush: Supported 00:08:53.766 Reservation: Not Supported 00:08:53.766 Namespace Sharing Capabilities: Private 00:08:53.766 Size (in LBAs): 1048576 (4GiB) 00:08:53.766 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.766 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.766 Thin Provisioning: Not Supported 00:08:53.766 Per-NS Atomic Units: No 00:08:53.766 Maximum Single Source Range 
Length: 128 00:08:53.766 Maximum Copy Length: 128 00:08:53.766 Maximum Source Range Count: 128 00:08:53.766 NGUID/EUI64 Never Reused: No 00:08:53.766 Namespace Write Protected: No 00:08:53.766 Number of LBA Formats: 8 00:08:53.766 Current LBA Format: LBA Format #04 00:08:53.766 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.766 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.766 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.766 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.766 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.766 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.766 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.766 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.766 00:08:53.766 NVM Specific Namespace Data 00:08:53.766 =========================== 00:08:53.766 Logical Block Storage Tag Mask: 0 00:08:53.766 Protection Information Capabilities: 00:08:53.766 16b Guard Protection Information Storage Tag Support: No 00:08:53.766 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.766 Storage Tag Check Read Support: No 00:08:53.766 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.766 14:14:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.766 14:14:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:54.026 ===================================================== 00:08:54.026 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.026 ===================================================== 00:08:54.026 Controller Capabilities/Features 00:08:54.026 ================================ 00:08:54.026 Vendor ID: 1b36 00:08:54.026 Subsystem Vendor ID: 1af4 00:08:54.026 Serial Number: 12343 00:08:54.026 Model Number: QEMU NVMe Ctrl 00:08:54.026 Firmware Version: 8.0.0 00:08:54.026 Recommended Arb Burst: 6 00:08:54.026 IEEE OUI Identifier: 00 54 52 00:08:54.026 Multi-path I/O 00:08:54.026 May have multiple subsystem ports: No 00:08:54.026 May have multiple controllers: Yes 00:08:54.026 Associated with SR-IOV VF: No 00:08:54.026 Max Data Transfer Size: 524288 00:08:54.026 Max Number of Namespaces: 256 00:08:54.026 Max Number of I/O Queues: 64 00:08:54.026 NVMe Specification Version (VS): 1.4 00:08:54.026 NVMe Specification Version (Identify): 1.4 00:08:54.026 Maximum Queue Entries: 2048 00:08:54.026 Contiguous Queues Required: Yes 00:08:54.026 Arbitration Mechanisms Supported 00:08:54.026 Weighted Round Robin: Not Supported 00:08:54.026 Vendor Specific: Not Supported 00:08:54.026 Reset Timeout: 7500 ms 00:08:54.026 Doorbell Stride: 4 bytes 00:08:54.026 NVM Subsystem Reset: Not Supported 
00:08:54.026 Command Sets Supported 00:08:54.026 NVM Command Set: Supported 00:08:54.026 Boot Partition: Not Supported 00:08:54.026 Memory Page Size Minimum: 4096 bytes 00:08:54.026 Memory Page Size Maximum: 65536 bytes 00:08:54.026 Persistent Memory Region: Not Supported 00:08:54.026 Optional Asynchronous Events Supported 00:08:54.026 Namespace Attribute Notices: Supported 00:08:54.026 Firmware Activation Notices: Not Supported 00:08:54.026 ANA Change Notices: Not Supported 00:08:54.026 PLE Aggregate Log Change Notices: Not Supported 00:08:54.026 LBA Status Info Alert Notices: Not Supported 00:08:54.026 EGE Aggregate Log Change Notices: Not Supported 00:08:54.026 Normal NVM Subsystem Shutdown event: Not Supported 00:08:54.026 Zone Descriptor Change Notices: Not Supported 00:08:54.026 Discovery Log Change Notices: Not Supported 00:08:54.026 Controller Attributes 00:08:54.026 128-bit Host Identifier: Not Supported 00:08:54.026 Non-Operational Permissive Mode: Not Supported 00:08:54.026 NVM Sets: Not Supported 00:08:54.026 Read Recovery Levels: Not Supported 00:08:54.026 Endurance Groups: Supported 00:08:54.026 Predictable Latency Mode: Not Supported 00:08:54.026 Traffic Based Keep ALive: Not Supported 00:08:54.026 Namespace Granularity: Not Supported 00:08:54.026 SQ Associations: Not Supported 00:08:54.026 UUID List: Not Supported 00:08:54.026 Multi-Domain Subsystem: Not Supported 00:08:54.026 Fixed Capacity Management: Not Supported 00:08:54.026 Variable Capacity Management: Not Supported 00:08:54.026 Delete Endurance Group: Not Supported 00:08:54.026 Delete NVM Set: Not Supported 00:08:54.026 Extended LBA Formats Supported: Supported 00:08:54.026 Flexible Data Placement Supported: Supported 00:08:54.026 00:08:54.026 Controller Memory Buffer Support 00:08:54.026 ================================ 00:08:54.026 Supported: No 00:08:54.026 00:08:54.026 Persistent Memory Region Support 00:08:54.026 ================================ 00:08:54.026 Supported: No 00:08:54.026 00:08:54.026 Admin Command Set Attributes 00:08:54.026 ============================ 00:08:54.026 Security Send/Receive: Not Supported 00:08:54.026 Format NVM: Supported 00:08:54.026 Firmware Activate/Download: Not Supported 00:08:54.026 Namespace Management: Supported 00:08:54.026 Device Self-Test: Not Supported 00:08:54.026 Directives: Supported 00:08:54.026 NVMe-MI: Not Supported 00:08:54.026 Virtualization Management: Not Supported 00:08:54.026 Doorbell Buffer Config: Supported 00:08:54.026 Get LBA Status Capability: Not Supported 00:08:54.026 Command & Feature Lockdown Capability: Not Supported 00:08:54.026 Abort Command Limit: 4 00:08:54.026 Async Event Request Limit: 4 00:08:54.026 Number of Firmware Slots: N/A 00:08:54.026 Firmware Slot 1 Read-Only: N/A 00:08:54.026 Firmware Activation Without Reset: N/A 00:08:54.026 Multiple Update Detection Support: N/A 00:08:54.026 Firmware Update Granularity: No Information Provided 00:08:54.026 Per-Namespace SMART Log: Yes 00:08:54.026 Asymmetric Namespace Access Log Page: Not Supported 00:08:54.026 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:54.026 Command Effects Log Page: Supported 00:08:54.026 Get Log Page Extended Data: Supported 00:08:54.026 Telemetry Log Pages: Not Supported 00:08:54.026 Persistent Event Log Pages: Not Supported 00:08:54.026 Supported Log Pages Log Page: May Support 00:08:54.026 Commands Supported & Effects Log Page: Not Supported 00:08:54.026 Feature Identifiers & Effects Log Page:May Support 00:08:54.026 NVMe-MI Commands & Effects Log Page: May 
Support 00:08:54.026 Data Area 4 for Telemetry Log: Not Supported 00:08:54.026 Error Log Page Entries Supported: 1 00:08:54.026 Keep Alive: Not Supported 00:08:54.026 00:08:54.026 NVM Command Set Attributes 00:08:54.026 ========================== 00:08:54.026 Submission Queue Entry Size 00:08:54.026 Max: 64 00:08:54.026 Min: 64 00:08:54.026 Completion Queue Entry Size 00:08:54.026 Max: 16 00:08:54.027 Min: 16 00:08:54.027 Number of Namespaces: 256 00:08:54.027 Compare Command: Supported 00:08:54.027 Write Uncorrectable Command: Not Supported 00:08:54.027 Dataset Management Command: Supported 00:08:54.027 Write Zeroes Command: Supported 00:08:54.027 Set Features Save Field: Supported 00:08:54.027 Reservations: Not Supported 00:08:54.027 Timestamp: Supported 00:08:54.027 Copy: Supported 00:08:54.027 Volatile Write Cache: Present 00:08:54.027 Atomic Write Unit (Normal): 1 00:08:54.027 Atomic Write Unit (PFail): 1 00:08:54.027 Atomic Compare & Write Unit: 1 00:08:54.027 Fused Compare & Write: Not Supported 00:08:54.027 Scatter-Gather List 00:08:54.027 SGL Command Set: Supported 00:08:54.027 SGL Keyed: Not Supported 00:08:54.027 SGL Bit Bucket Descriptor: Not Supported 00:08:54.027 SGL Metadata Pointer: Not Supported 00:08:54.027 Oversized SGL: Not Supported 00:08:54.027 SGL Metadata Address: Not Supported 00:08:54.027 SGL Offset: Not Supported 00:08:54.027 Transport SGL Data Block: Not Supported 00:08:54.027 Replay Protected Memory Block: Not Supported 00:08:54.027 00:08:54.027 Firmware Slot Information 00:08:54.027 ========================= 00:08:54.027 Active slot: 1 00:08:54.027 Slot 1 Firmware Revision: 1.0 00:08:54.027 00:08:54.027 00:08:54.027 Commands Supported and Effects 00:08:54.027 ============================== 00:08:54.027 Admin Commands 00:08:54.027 -------------- 00:08:54.027 Delete I/O Submission Queue (00h): Supported 00:08:54.027 Create I/O Submission Queue (01h): Supported 00:08:54.027 Get Log Page (02h): Supported 00:08:54.027 Delete I/O Completion Queue (04h): Supported 00:08:54.027 Create I/O Completion Queue (05h): Supported 00:08:54.027 Identify (06h): Supported 00:08:54.027 Abort (08h): Supported 00:08:54.027 Set Features (09h): Supported 00:08:54.027 Get Features (0Ah): Supported 00:08:54.027 Asynchronous Event Request (0Ch): Supported 00:08:54.027 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:54.027 Directive Send (19h): Supported 00:08:54.027 Directive Receive (1Ah): Supported 00:08:54.027 Virtualization Management (1Ch): Supported 00:08:54.027 Doorbell Buffer Config (7Ch): Supported 00:08:54.027 Format NVM (80h): Supported LBA-Change 00:08:54.027 I/O Commands 00:08:54.027 ------------ 00:08:54.027 Flush (00h): Supported LBA-Change 00:08:54.027 Write (01h): Supported LBA-Change 00:08:54.027 Read (02h): Supported 00:08:54.027 Compare (05h): Supported 00:08:54.027 Write Zeroes (08h): Supported LBA-Change 00:08:54.027 Dataset Management (09h): Supported LBA-Change 00:08:54.027 Unknown (0Ch): Supported 00:08:54.027 Unknown (12h): Supported 00:08:54.027 Copy (19h): Supported LBA-Change 00:08:54.027 Unknown (1Dh): Supported LBA-Change 00:08:54.027 00:08:54.027 Error Log 00:08:54.027 ========= 00:08:54.027 00:08:54.027 Arbitration 00:08:54.027 =========== 00:08:54.027 Arbitration Burst: no limit 00:08:54.027 00:08:54.027 Power Management 00:08:54.027 ================ 00:08:54.027 Number of Power States: 1 00:08:54.027 Current Power State: Power State #0 00:08:54.027 Power State #0: 00:08:54.027 Max Power: 25.00 W 00:08:54.027 Non-Operational State: 
Operational 00:08:54.027 Entry Latency: 16 microseconds 00:08:54.027 Exit Latency: 4 microseconds 00:08:54.027 Relative Read Throughput: 0 00:08:54.027 Relative Read Latency: 0 00:08:54.027 Relative Write Throughput: 0 00:08:54.027 Relative Write Latency: 0 00:08:54.027 Idle Power: Not Reported 00:08:54.027 Active Power: Not Reported 00:08:54.027 Non-Operational Permissive Mode: Not Supported 00:08:54.027 00:08:54.027 Health Information 00:08:54.027 ================== 00:08:54.027 Critical Warnings: 00:08:54.027 Available Spare Space: OK 00:08:54.027 Temperature: OK 00:08:54.027 Device Reliability: OK 00:08:54.027 Read Only: No 00:08:54.027 Volatile Memory Backup: OK 00:08:54.027 Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.027 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:54.027 Available Spare: 0% 00:08:54.027 Available Spare Threshold: 0% 00:08:54.027 Life Percentage Used: 0% 00:08:54.027 Data Units Read: 767 00:08:54.027 Data Units Written: 660 00:08:54.027 Host Read Commands: 33804 00:08:54.027 Host Write Commands: 32394 00:08:54.027 Controller Busy Time: 0 minutes 00:08:54.027 Power Cycles: 0 00:08:54.027 Power On Hours: 0 hours 00:08:54.027 Unsafe Shutdowns: 0 00:08:54.027 Unrecoverable Media Errors: 0 00:08:54.027 Lifetime Error Log Entries: 0 00:08:54.027 Warning Temperature Time: 0 minutes 00:08:54.027 Critical Temperature Time: 0 minutes 00:08:54.027 00:08:54.027 Number of Queues 00:08:54.027 ================ 00:08:54.027 Number of I/O Submission Queues: 64 00:08:54.027 Number of I/O Completion Queues: 64 00:08:54.027 00:08:54.027 ZNS Specific Controller Data 00:08:54.027 ============================ 00:08:54.027 Zone Append Size Limit: 0 00:08:54.027 00:08:54.027 00:08:54.027 Active Namespaces 00:08:54.027 ================= 00:08:54.027 Namespace ID:1 00:08:54.027 Error Recovery Timeout: Unlimited 00:08:54.027 Command Set Identifier: NVM (00h) 00:08:54.027 Deallocate: Supported 00:08:54.027 Deallocated/Unwritten Error: Supported 00:08:54.027 Deallocated Read Value: All 0x00 00:08:54.027 Deallocate in Write Zeroes: Not Supported 00:08:54.027 Deallocated Guard Field: 0xFFFF 00:08:54.027 Flush: Supported 00:08:54.027 Reservation: Not Supported 00:08:54.027 Namespace Sharing Capabilities: Multiple Controllers 00:08:54.027 Size (in LBAs): 262144 (1GiB) 00:08:54.027 Capacity (in LBAs): 262144 (1GiB) 00:08:54.027 Utilization (in LBAs): 262144 (1GiB) 00:08:54.027 Thin Provisioning: Not Supported 00:08:54.027 Per-NS Atomic Units: No 00:08:54.027 Maximum Single Source Range Length: 128 00:08:54.027 Maximum Copy Length: 128 00:08:54.027 Maximum Source Range Count: 128 00:08:54.027 NGUID/EUI64 Never Reused: No 00:08:54.027 Namespace Write Protected: No 00:08:54.027 Endurance group ID: 1 00:08:54.027 Number of LBA Formats: 8 00:08:54.027 Current LBA Format: LBA Format #04 00:08:54.027 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:54.027 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:54.027 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:54.027 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:54.027 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:54.027 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:54.027 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:54.027 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:54.027 00:08:54.027 Get Feature FDP: 00:08:54.027 ================ 00:08:54.027 Enabled: Yes 00:08:54.027 FDP configuration index: 0 00:08:54.027 00:08:54.027 FDP configurations log page 00:08:54.027 
=========================== 00:08:54.027 Number of FDP configurations: 1 00:08:54.027 Version: 0 00:08:54.027 Size: 112 00:08:54.027 FDP Configuration Descriptor: 0 00:08:54.027 Descriptor Size: 96 00:08:54.027 Reclaim Group Identifier format: 2 00:08:54.027 FDP Volatile Write Cache: Not Present 00:08:54.027 FDP Configuration: Valid 00:08:54.027 Vendor Specific Size: 0 00:08:54.027 Number of Reclaim Groups: 2 00:08:54.027 Number of Reclaim Unit Handles: 8 00:08:54.027 Max Placement Identifiers: 128 00:08:54.027 Number of Namespaces Supported: 256 00:08:54.027 Reclaim unit Nominal Size: 6000000 bytes 00:08:54.027 Estimated Reclaim Unit Time Limit: Not Reported 00:08:54.027 RUH Desc #000: RUH Type: Initially Isolated 00:08:54.027 RUH Desc #001: RUH Type: Initially Isolated 00:08:54.028 RUH Desc #002: RUH Type: Initially Isolated 00:08:54.028 RUH Desc #003: RUH Type: Initially Isolated 00:08:54.028 RUH Desc #004: RUH Type: Initially Isolated 00:08:54.028 RUH Desc #005: RUH Type: Initially Isolated 00:08:54.028 RUH Desc #006: RUH Type: Initially Isolated 00:08:54.028 RUH Desc #007: RUH Type: Initially Isolated 00:08:54.028 00:08:54.028 FDP reclaim unit handle usage log page 00:08:54.287 ====================================== 00:08:54.287 Number of Reclaim Unit Handles: 8 00:08:54.287 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:54.287 RUH Usage Desc #001: RUH Attributes: Unused 00:08:54.287 RUH Usage Desc #002: RUH Attributes: Unused 00:08:54.287 RUH Usage Desc #003: RUH Attributes: Unused 00:08:54.287 RUH Usage Desc #004: RUH Attributes: Unused 00:08:54.287 RUH Usage Desc #005: RUH Attributes: Unused 00:08:54.287 RUH Usage Desc #006: RUH Attributes: Unused 00:08:54.287 RUH Usage Desc #007: RUH Attributes: Unused 00:08:54.287 00:08:54.287 FDP statistics log page 00:08:54.287 ======================= 00:08:54.287 Host bytes with metadata written: 422027264 00:08:54.287 Media bytes with metadata written: 422072320 00:08:54.287 Media bytes erased: 0 00:08:54.287 00:08:54.287 FDP events log page 00:08:54.287 =================== 00:08:54.287 Number of FDP events: 0 00:08:54.287 00:08:54.287 NVM Specific Namespace Data 00:08:54.287 =========================== 00:08:54.287 Logical Block Storage Tag Mask: 0 00:08:54.287 Protection Information Capabilities: 00:08:54.287 16b Guard Protection Information Storage Tag Support: No 00:08:54.287 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:54.287 Storage Tag Check Read Support: No 00:08:54.287 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.287 00:08:54.287 real 0m1.635s 00:08:54.287 user 0m0.674s 00:08:54.287 sys 0m0.756s 00:08:54.287 ************************************ 00:08:54.287 END TEST nvme_identify 00:08:54.287
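[Editorial note, not part of the captured console output] The identify pass that just finished is driven by the per-device loop visible in the xtrace above (nvme/nvme.sh@15 and @16): each PCIe bus/device/function address is handed to spdk_nvme_identify. A minimal standalone sketch of that loop is shown below; the bdfs array is filled in by hand with the four controllers seen in this run, whereas the test suite discovers them itself, and the binary path is the one used on this vagrant runner.

    #!/usr/bin/env bash
    # Hardcoded for illustration; the autotest scripts enumerate these dynamically.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # -r selects the transport and target address, -i 0 selects shared memory group id 0,
        # matching the invocation recorded in the trace above.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:${bdf}" -i 0
    done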
************************************ 00:08:54.287 14:14:13 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.287 14:14:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:54.287 14:14:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:54.287 14:14:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.287 14:14:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.287 14:14:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.287 ************************************ 00:08:54.287 START TEST nvme_perf 00:08:54.287 ************************************ 00:08:54.287 14:14:13 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:54.287 14:14:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:55.669 Initializing NVMe Controllers 00:08:55.669 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:55.669 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:55.669 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:55.669 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:55.669 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:55.669 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:55.669 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:55.669 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:55.669 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:55.669 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:55.669 Initialization complete. Launching workers. 00:08:55.669 ======================================================== 00:08:55.669 Latency(us) 00:08:55.669 Device Information : IOPS MiB/s Average min max 00:08:55.669 PCIE (0000:00:10.0) NSID 1 from core 0: 13564.19 158.96 9445.21 7846.78 43803.77 00:08:55.669 PCIE (0000:00:11.0) NSID 1 from core 0: 13564.19 158.96 9421.59 7923.64 41049.40 00:08:55.669 PCIE (0000:00:13.0) NSID 1 from core 0: 13564.19 158.96 9395.94 7865.38 38811.32 00:08:55.669 PCIE (0000:00:12.0) NSID 1 from core 0: 13564.19 158.96 9369.59 7887.27 36012.71 00:08:55.669 PCIE (0000:00:12.0) NSID 2 from core 0: 13564.19 158.96 9343.32 7876.03 33327.13 00:08:55.669 PCIE (0000:00:12.0) NSID 3 from core 0: 13564.19 158.96 9316.94 7934.69 30683.47 00:08:55.669 ======================================================== 00:08:55.669 Total : 81385.13 953.73 9382.10 7846.78 43803.77 00:08:55.669 00:08:55.669 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:55.669 ================================================================================= 00:08:55.669 1.00000% : 8102.633us 00:08:55.669 10.00000% : 8460.102us 00:08:55.669 25.00000% : 8698.415us 00:08:55.669 50.00000% : 9115.462us 00:08:55.669 75.00000% : 9592.087us 00:08:55.669 90.00000% : 10247.447us 00:08:55.669 95.00000% : 10545.338us 00:08:55.669 98.00000% : 10902.807us 00:08:55.669 99.00000% : 11856.058us 00:08:55.669 99.50000% : 36223.535us 00:08:55.669 99.90000% : 43372.916us 00:08:55.669 99.99000% : 43849.542us 00:08:55.669 99.99900% : 43849.542us 00:08:55.669 99.99990% : 43849.542us 00:08:55.669 99.99999% : 43849.542us 00:08:55.669 00:08:55.669 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:55.669 ================================================================================= 00:08:55.669 1.00000% : 8162.211us 00:08:55.669 10.00000% : 8519.680us 00:08:55.669 25.00000% : 8757.993us 
00:08:55.669 50.00000% : 9055.884us 00:08:55.669 75.00000% : 9592.087us 00:08:55.669 90.00000% : 10187.869us 00:08:55.669 95.00000% : 10426.182us 00:08:55.669 98.00000% : 10724.073us 00:08:55.669 99.00000% : 11200.698us 00:08:55.669 99.50000% : 34078.720us 00:08:55.669 99.90000% : 40751.476us 00:08:55.669 99.99000% : 41228.102us 00:08:55.669 99.99900% : 41228.102us 00:08:55.669 99.99990% : 41228.102us 00:08:55.669 99.99999% : 41228.102us 00:08:55.669 00:08:55.669 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:55.669 ================================================================================= 00:08:55.669 1.00000% : 8162.211us 00:08:55.669 10.00000% : 8519.680us 00:08:55.669 25.00000% : 8698.415us 00:08:55.669 50.00000% : 9055.884us 00:08:55.669 75.00000% : 9592.087us 00:08:55.669 90.00000% : 10187.869us 00:08:55.669 95.00000% : 10485.760us 00:08:55.669 98.00000% : 10724.073us 00:08:55.669 99.00000% : 11200.698us 00:08:55.669 99.50000% : 31695.593us 00:08:55.669 99.90000% : 38368.349us 00:08:55.669 99.99000% : 38844.975us 00:08:55.669 99.99900% : 38844.975us 00:08:55.669 99.99990% : 38844.975us 00:08:55.669 99.99999% : 38844.975us 00:08:55.669 00:08:55.669 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:55.669 ================================================================================= 00:08:55.669 1.00000% : 8162.211us 00:08:55.669 10.00000% : 8519.680us 00:08:55.669 25.00000% : 8757.993us 00:08:55.669 50.00000% : 9055.884us 00:08:55.669 75.00000% : 9592.087us 00:08:55.669 90.00000% : 10187.869us 00:08:55.669 95.00000% : 10426.182us 00:08:55.669 98.00000% : 10724.073us 00:08:55.669 99.00000% : 11200.698us 00:08:55.669 99.50000% : 29074.153us 00:08:55.669 99.90000% : 35746.909us 00:08:55.669 99.99000% : 35985.222us 00:08:55.669 99.99900% : 36223.535us 00:08:55.669 99.99990% : 36223.535us 00:08:55.669 99.99999% : 36223.535us 00:08:55.669 00:08:55.669 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:55.669 ================================================================================= 00:08:55.669 1.00000% : 8162.211us 00:08:55.669 10.00000% : 8519.680us 00:08:55.669 25.00000% : 8698.415us 00:08:55.669 50.00000% : 9055.884us 00:08:55.669 75.00000% : 9592.087us 00:08:55.669 90.00000% : 10187.869us 00:08:55.669 95.00000% : 10426.182us 00:08:55.669 98.00000% : 10724.073us 00:08:55.669 99.00000% : 11260.276us 00:08:55.669 99.50000% : 26214.400us 00:08:55.669 99.90000% : 32887.156us 00:08:55.669 99.99000% : 33363.782us 00:08:55.669 99.99900% : 33363.782us 00:08:55.669 99.99990% : 33363.782us 00:08:55.669 99.99999% : 33363.782us 00:08:55.669 00:08:55.669 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:55.669 ================================================================================= 00:08:55.669 1.00000% : 8162.211us 00:08:55.669 10.00000% : 8519.680us 00:08:55.669 25.00000% : 8698.415us 00:08:55.669 50.00000% : 9055.884us 00:08:55.669 75.00000% : 9592.087us 00:08:55.669 90.00000% : 10187.869us 00:08:55.669 95.00000% : 10426.182us 00:08:55.669 98.00000% : 10724.073us 00:08:55.669 99.00000% : 11260.276us 00:08:55.669 99.50000% : 23592.960us 00:08:55.669 99.90000% : 30265.716us 00:08:55.669 99.99000% : 30742.342us 00:08:55.669 99.99900% : 30742.342us 00:08:55.669 99.99990% : 30742.342us 00:08:55.669 99.99999% : 30742.342us 00:08:55.669 00:08:55.669 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:55.669 
============================================================================== 00:08:55.669 Range in us Cumulative IO count 00:08:55.669 7804.742 - 7864.320: 0.0074% ( 1) 00:08:55.669 7864.320 - 7923.898: 0.1106% ( 14) 00:08:55.669 7923.898 - 7983.476: 0.2874% ( 24) 00:08:55.669 7983.476 - 8043.055: 0.5823% ( 40) 00:08:55.669 8043.055 - 8102.633: 1.1055% ( 71) 00:08:55.669 8102.633 - 8162.211: 1.9163% ( 110) 00:08:55.669 8162.211 - 8221.789: 3.0881% ( 159) 00:08:55.669 8221.789 - 8281.367: 4.7022% ( 219) 00:08:55.669 8281.367 - 8340.945: 6.9281% ( 302) 00:08:55.669 8340.945 - 8400.524: 9.6698% ( 372) 00:08:55.669 8400.524 - 8460.102: 12.8022% ( 425) 00:08:55.669 8460.102 - 8519.680: 16.1335% ( 452) 00:08:55.669 8519.680 - 8579.258: 19.4723% ( 453) 00:08:55.669 8579.258 - 8638.836: 23.0469% ( 485) 00:08:55.669 8638.836 - 8698.415: 26.5920% ( 481) 00:08:55.669 8698.415 - 8757.993: 30.1666% ( 485) 00:08:55.669 8757.993 - 8817.571: 33.9475% ( 513) 00:08:55.669 8817.571 - 8877.149: 37.6400% ( 501) 00:08:55.669 8877.149 - 8936.727: 41.4947% ( 523) 00:08:55.669 8936.727 - 8996.305: 45.4452% ( 536) 00:08:55.669 8996.305 - 9055.884: 49.1966% ( 509) 00:08:55.669 9055.884 - 9115.462: 52.9260% ( 506) 00:08:55.669 9115.462 - 9175.040: 56.7512% ( 519) 00:08:55.669 9175.040 - 9234.618: 60.1857% ( 466) 00:08:55.669 9234.618 - 9294.196: 63.5908% ( 462) 00:08:55.669 9294.196 - 9353.775: 66.8632% ( 444) 00:08:55.669 9353.775 - 9413.353: 69.6197% ( 374) 00:08:55.669 9413.353 - 9472.931: 72.1035% ( 337) 00:08:55.669 9472.931 - 9532.509: 74.3956% ( 311) 00:08:55.669 9532.509 - 9592.087: 76.3488% ( 265) 00:08:55.669 9592.087 - 9651.665: 77.9850% ( 222) 00:08:55.669 9651.665 - 9711.244: 79.5991% ( 219) 00:08:55.669 9711.244 - 9770.822: 80.9847% ( 188) 00:08:55.669 9770.822 - 9830.400: 82.3703% ( 188) 00:08:55.669 9830.400 - 9889.978: 83.7190% ( 183) 00:08:55.669 9889.978 - 9949.556: 84.9351% ( 165) 00:08:55.669 9949.556 - 10009.135: 86.1955% ( 171) 00:08:55.669 10009.135 - 10068.713: 87.3452% ( 156) 00:08:55.669 10068.713 - 10128.291: 88.4876% ( 155) 00:08:55.669 10128.291 - 10187.869: 89.5637% ( 146) 00:08:55.669 10187.869 - 10247.447: 90.6103% ( 142) 00:08:55.669 10247.447 - 10307.025: 91.6126% ( 136) 00:08:55.669 10307.025 - 10366.604: 92.6666% ( 143) 00:08:55.669 10366.604 - 10426.182: 93.6542% ( 134) 00:08:55.669 10426.182 - 10485.760: 94.5976% ( 128) 00:08:55.669 10485.760 - 10545.338: 95.4894% ( 121) 00:08:55.669 10545.338 - 10604.916: 96.2485% ( 103) 00:08:55.669 10604.916 - 10664.495: 96.8308% ( 79) 00:08:55.669 10664.495 - 10724.073: 97.3320% ( 68) 00:08:55.669 10724.073 - 10783.651: 97.7373% ( 55) 00:08:55.669 10783.651 - 10843.229: 97.9732% ( 32) 00:08:55.669 10843.229 - 10902.807: 98.1206% ( 20) 00:08:55.669 10902.807 - 10962.385: 98.3048% ( 25) 00:08:55.669 10962.385 - 11021.964: 98.4080% ( 14) 00:08:55.669 11021.964 - 11081.542: 98.5702% ( 22) 00:08:55.669 11081.542 - 11141.120: 98.6586% ( 12) 00:08:55.669 11141.120 - 11200.698: 98.7249% ( 9) 00:08:55.669 11200.698 - 11260.276: 98.7913% ( 9) 00:08:55.669 11260.276 - 11319.855: 98.8281% ( 5) 00:08:55.669 11319.855 - 11379.433: 98.8650% ( 5) 00:08:55.669 11379.433 - 11439.011: 98.8871% ( 3) 00:08:55.669 11439.011 - 11498.589: 98.9018% ( 2) 00:08:55.669 11498.589 - 11558.167: 98.9239% ( 3) 00:08:55.669 11558.167 - 11617.745: 98.9460% ( 3) 00:08:55.669 11617.745 - 11677.324: 98.9534% ( 1) 00:08:55.669 11677.324 - 11736.902: 98.9755% ( 3) 00:08:55.669 11736.902 - 11796.480: 98.9976% ( 3) 00:08:55.669 11796.480 - 11856.058: 99.0198% ( 3) 00:08:55.669 
11856.058 - 11915.636: 99.0345% ( 2) 00:08:55.669 11915.636 - 11975.215: 99.0492% ( 2) 00:08:55.669 11975.215 - 12034.793: 99.0566% ( 1) 00:08:55.669 33602.095 - 33840.407: 99.0861% ( 4) 00:08:55.669 33840.407 - 34078.720: 99.1229% ( 5) 00:08:55.669 34078.720 - 34317.033: 99.1672% ( 6) 00:08:55.669 34317.033 - 34555.345: 99.2188% ( 7) 00:08:55.669 34555.345 - 34793.658: 99.2630% ( 6) 00:08:55.669 34793.658 - 35031.971: 99.2998% ( 5) 00:08:55.669 35031.971 - 35270.284: 99.3514% ( 7) 00:08:55.669 35270.284 - 35508.596: 99.3883% ( 5) 00:08:55.669 35508.596 - 35746.909: 99.4399% ( 7) 00:08:55.669 35746.909 - 35985.222: 99.4841% ( 6) 00:08:55.669 35985.222 - 36223.535: 99.5209% ( 5) 00:08:55.669 36223.535 - 36461.847: 99.5283% ( 1) 00:08:55.669 40989.789 - 41228.102: 99.5357% ( 1) 00:08:55.669 41228.102 - 41466.415: 99.5652% ( 4) 00:08:55.669 41466.415 - 41704.727: 99.6167% ( 7) 00:08:55.669 41704.727 - 41943.040: 99.6610% ( 6) 00:08:55.669 41943.040 - 42181.353: 99.7052% ( 6) 00:08:55.669 42181.353 - 42419.665: 99.7420% ( 5) 00:08:55.669 42419.665 - 42657.978: 99.7863% ( 6) 00:08:55.669 42657.978 - 42896.291: 99.8379% ( 7) 00:08:55.669 42896.291 - 43134.604: 99.8821% ( 6) 00:08:55.669 43134.604 - 43372.916: 99.9189% ( 5) 00:08:55.669 43372.916 - 43611.229: 99.9631% ( 6) 00:08:55.669 43611.229 - 43849.542: 100.0000% ( 5) 00:08:55.669 00:08:55.669 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:55.669 ============================================================================== 00:08:55.669 Range in us Cumulative IO count 00:08:55.669 7864.320 - 7923.898: 0.0074% ( 1) 00:08:55.669 7923.898 - 7983.476: 0.0442% ( 5) 00:08:55.669 7983.476 - 8043.055: 0.2137% ( 23) 00:08:55.669 8043.055 - 8102.633: 0.5528% ( 46) 00:08:55.669 8102.633 - 8162.211: 1.0392% ( 66) 00:08:55.669 8162.211 - 8221.789: 1.7394% ( 95) 00:08:55.669 8221.789 - 8281.367: 2.8007% ( 144) 00:08:55.669 8281.367 - 8340.945: 4.4517% ( 224) 00:08:55.669 8340.945 - 8400.524: 6.7217% ( 308) 00:08:55.669 8400.524 - 8460.102: 9.5224% ( 380) 00:08:55.669 8460.102 - 8519.680: 12.9791% ( 469) 00:08:55.669 8519.680 - 8579.258: 16.7895% ( 517) 00:08:55.669 8579.258 - 8638.836: 20.7695% ( 540) 00:08:55.669 8638.836 - 8698.415: 24.9042% ( 561) 00:08:55.669 8698.415 - 8757.993: 29.1200% ( 572) 00:08:55.669 8757.993 - 8817.571: 33.4832% ( 592) 00:08:55.669 8817.571 - 8877.149: 37.9348% ( 604) 00:08:55.669 8877.149 - 8936.727: 42.3718% ( 602) 00:08:55.669 8936.727 - 8996.305: 46.6465% ( 580) 00:08:55.669 8996.305 - 9055.884: 50.8255% ( 567) 00:08:55.669 9055.884 - 9115.462: 54.8496% ( 546) 00:08:55.669 9115.462 - 9175.040: 58.6675% ( 518) 00:08:55.669 9175.040 - 9234.618: 62.1094% ( 467) 00:08:55.669 9234.618 - 9294.196: 65.1165% ( 408) 00:08:55.669 9294.196 - 9353.775: 67.7182% ( 353) 00:08:55.669 9353.775 - 9413.353: 70.0029% ( 310) 00:08:55.669 9413.353 - 9472.931: 72.1624% ( 293) 00:08:55.669 9472.931 - 9532.509: 73.9976% ( 249) 00:08:55.669 9532.509 - 9592.087: 75.7886% ( 243) 00:08:55.669 9592.087 - 9651.665: 77.4396% ( 224) 00:08:55.669 9651.665 - 9711.244: 78.9947% ( 211) 00:08:55.669 9711.244 - 9770.822: 80.5719% ( 214) 00:08:55.669 9770.822 - 9830.400: 82.0534% ( 201) 00:08:55.669 9830.400 - 9889.978: 83.5274% ( 200) 00:08:55.669 9889.978 - 9949.556: 84.9794% ( 197) 00:08:55.669 9949.556 - 10009.135: 86.3576% ( 187) 00:08:55.669 10009.135 - 10068.713: 87.8022% ( 196) 00:08:55.669 10068.713 - 10128.291: 89.1362% ( 181) 00:08:55.669 10128.291 - 10187.869: 90.4702% ( 181) 00:08:55.669 10187.869 - 10247.447: 91.7674% ( 176) 
00:08:55.669 10247.447 - 10307.025: 93.0056% ( 168) 00:08:55.669 10307.025 - 10366.604: 94.1111% ( 150) 00:08:55.669 10366.604 - 10426.182: 95.1651% ( 143) 00:08:55.669 10426.182 - 10485.760: 96.0274% ( 117) 00:08:55.669 10485.760 - 10545.338: 96.7129% ( 93) 00:08:55.669 10545.338 - 10604.916: 97.3025% ( 80) 00:08:55.669 10604.916 - 10664.495: 97.7521% ( 61) 00:08:55.669 10664.495 - 10724.073: 98.0321% ( 38) 00:08:55.669 10724.073 - 10783.651: 98.2459% ( 29) 00:08:55.669 10783.651 - 10843.229: 98.4080% ( 22) 00:08:55.669 10843.229 - 10902.807: 98.5628% ( 21) 00:08:55.669 10902.807 - 10962.385: 98.6881% ( 17) 00:08:55.669 10962.385 - 11021.964: 98.7986% ( 15) 00:08:55.669 11021.964 - 11081.542: 98.9092% ( 15) 00:08:55.669 11081.542 - 11141.120: 98.9829% ( 10) 00:08:55.669 11141.120 - 11200.698: 99.0419% ( 8) 00:08:55.669 11200.698 - 11260.276: 99.0566% ( 2) 00:08:55.669 31457.280 - 31695.593: 99.0640% ( 1) 00:08:55.669 31695.593 - 31933.905: 99.1156% ( 7) 00:08:55.669 31933.905 - 32172.218: 99.1672% ( 7) 00:08:55.669 32172.218 - 32410.531: 99.2040% ( 5) 00:08:55.669 32410.531 - 32648.844: 99.2556% ( 7) 00:08:55.669 32648.844 - 32887.156: 99.2998% ( 6) 00:08:55.669 32887.156 - 33125.469: 99.3440% ( 6) 00:08:55.669 33125.469 - 33363.782: 99.3956% ( 7) 00:08:55.669 33363.782 - 33602.095: 99.4399% ( 6) 00:08:55.669 33602.095 - 33840.407: 99.4841% ( 6) 00:08:55.669 33840.407 - 34078.720: 99.5283% ( 6) 00:08:55.669 38606.662 - 38844.975: 99.5652% ( 5) 00:08:55.669 38844.975 - 39083.287: 99.6167% ( 7) 00:08:55.669 39083.287 - 39321.600: 99.6536% ( 5) 00:08:55.669 39321.600 - 39559.913: 99.6978% ( 6) 00:08:55.669 39559.913 - 39798.225: 99.7494% ( 7) 00:08:55.669 39798.225 - 40036.538: 99.7936% ( 6) 00:08:55.669 40036.538 - 40274.851: 99.8379% ( 6) 00:08:55.669 40274.851 - 40513.164: 99.8894% ( 7) 00:08:55.669 40513.164 - 40751.476: 99.9410% ( 7) 00:08:55.669 40751.476 - 40989.789: 99.9853% ( 6) 00:08:55.669 40989.789 - 41228.102: 100.0000% ( 2) 00:08:55.669 00:08:55.669 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:55.669 ============================================================================== 00:08:55.669 Range in us Cumulative IO count 00:08:55.670 7864.320 - 7923.898: 0.0516% ( 7) 00:08:55.670 7923.898 - 7983.476: 0.1400% ( 12) 00:08:55.670 7983.476 - 8043.055: 0.3096% ( 23) 00:08:55.670 8043.055 - 8102.633: 0.6117% ( 41) 00:08:55.670 8102.633 - 8162.211: 1.1277% ( 70) 00:08:55.670 8162.211 - 8221.789: 1.9163% ( 107) 00:08:55.670 8221.789 - 8281.367: 3.0439% ( 153) 00:08:55.670 8281.367 - 8340.945: 4.6506% ( 218) 00:08:55.670 8340.945 - 8400.524: 6.8175% ( 294) 00:08:55.670 8400.524 - 8460.102: 9.6035% ( 378) 00:08:55.670 8460.102 - 8519.680: 13.0085% ( 462) 00:08:55.670 8519.680 - 8579.258: 16.7600% ( 509) 00:08:55.670 8579.258 - 8638.836: 20.8063% ( 549) 00:08:55.670 8638.836 - 8698.415: 25.0516% ( 576) 00:08:55.670 8698.415 - 8757.993: 29.3558% ( 584) 00:08:55.670 8757.993 - 8817.571: 33.6306% ( 580) 00:08:55.670 8817.571 - 8877.149: 38.0159% ( 595) 00:08:55.670 8877.149 - 8936.727: 42.4602% ( 603) 00:08:55.670 8936.727 - 8996.305: 46.7644% ( 584) 00:08:55.670 8996.305 - 9055.884: 51.0245% ( 578) 00:08:55.670 9055.884 - 9115.462: 55.0708% ( 549) 00:08:55.670 9115.462 - 9175.040: 59.0065% ( 534) 00:08:55.670 9175.040 - 9234.618: 62.3526% ( 454) 00:08:55.670 9234.618 - 9294.196: 65.3597% ( 408) 00:08:55.670 9294.196 - 9353.775: 68.1383% ( 377) 00:08:55.670 9353.775 - 9413.353: 70.7179% ( 350) 00:08:55.670 9413.353 - 9472.931: 72.7078% ( 270) 00:08:55.670 9472.931 - 
9532.509: 74.5504% ( 250) 00:08:55.670 9532.509 - 9592.087: 76.2751% ( 234) 00:08:55.670 9592.087 - 9651.665: 77.9776% ( 231) 00:08:55.670 9651.665 - 9711.244: 79.5180% ( 209) 00:08:55.670 9711.244 - 9770.822: 80.9847% ( 199) 00:08:55.670 9770.822 - 9830.400: 82.4292% ( 196) 00:08:55.670 9830.400 - 9889.978: 83.7927% ( 185) 00:08:55.670 9889.978 - 9949.556: 85.1415% ( 183) 00:08:55.670 9949.556 - 10009.135: 86.4534% ( 178) 00:08:55.670 10009.135 - 10068.713: 87.7653% ( 178) 00:08:55.670 10068.713 - 10128.291: 89.0846% ( 179) 00:08:55.670 10128.291 - 10187.869: 90.3818% ( 176) 00:08:55.670 10187.869 - 10247.447: 91.6347% ( 170) 00:08:55.670 10247.447 - 10307.025: 92.8435% ( 164) 00:08:55.670 10307.025 - 10366.604: 93.9711% ( 153) 00:08:55.670 10366.604 - 10426.182: 94.9808% ( 137) 00:08:55.670 10426.182 - 10485.760: 95.8284% ( 115) 00:08:55.670 10485.760 - 10545.338: 96.5949% ( 104) 00:08:55.670 10545.338 - 10604.916: 97.2067% ( 83) 00:08:55.670 10604.916 - 10664.495: 97.7521% ( 74) 00:08:55.670 10664.495 - 10724.073: 98.0837% ( 45) 00:08:55.670 10724.073 - 10783.651: 98.3196% ( 32) 00:08:55.670 10783.651 - 10843.229: 98.5038% ( 25) 00:08:55.670 10843.229 - 10902.807: 98.6512% ( 20) 00:08:55.670 10902.807 - 10962.385: 98.7765% ( 17) 00:08:55.670 10962.385 - 11021.964: 98.8650% ( 12) 00:08:55.670 11021.964 - 11081.542: 98.9608% ( 13) 00:08:55.670 11081.542 - 11141.120: 98.9976% ( 5) 00:08:55.670 11141.120 - 11200.698: 99.0198% ( 3) 00:08:55.670 11200.698 - 11260.276: 99.0419% ( 3) 00:08:55.670 11260.276 - 11319.855: 99.0566% ( 2) 00:08:55.670 29312.465 - 29431.622: 99.0713% ( 2) 00:08:55.670 29431.622 - 29550.778: 99.0935% ( 3) 00:08:55.670 29550.778 - 29669.935: 99.1229% ( 4) 00:08:55.670 29669.935 - 29789.091: 99.1450% ( 3) 00:08:55.670 29789.091 - 29908.247: 99.1598% ( 2) 00:08:55.670 29908.247 - 30027.404: 99.1893% ( 4) 00:08:55.670 30027.404 - 30146.560: 99.2040% ( 2) 00:08:55.670 30146.560 - 30265.716: 99.2261% ( 3) 00:08:55.670 30265.716 - 30384.873: 99.2556% ( 4) 00:08:55.670 30384.873 - 30504.029: 99.2777% ( 3) 00:08:55.670 30504.029 - 30742.342: 99.3219% ( 6) 00:08:55.670 30742.342 - 30980.655: 99.3662% ( 6) 00:08:55.670 30980.655 - 31218.967: 99.4104% ( 6) 00:08:55.670 31218.967 - 31457.280: 99.4546% ( 6) 00:08:55.670 31457.280 - 31695.593: 99.5062% ( 7) 00:08:55.670 31695.593 - 31933.905: 99.5283% ( 3) 00:08:55.670 36461.847 - 36700.160: 99.5725% ( 6) 00:08:55.670 36700.160 - 36938.473: 99.6167% ( 6) 00:08:55.670 36938.473 - 37176.785: 99.6610% ( 6) 00:08:55.670 37176.785 - 37415.098: 99.7052% ( 6) 00:08:55.670 37415.098 - 37653.411: 99.7568% ( 7) 00:08:55.670 37653.411 - 37891.724: 99.8084% ( 7) 00:08:55.670 37891.724 - 38130.036: 99.8526% ( 6) 00:08:55.670 38130.036 - 38368.349: 99.9042% ( 7) 00:08:55.670 38368.349 - 38606.662: 99.9558% ( 7) 00:08:55.670 38606.662 - 38844.975: 100.0000% ( 6) 00:08:55.670 00:08:55.670 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:55.670 ============================================================================== 00:08:55.670 Range in us Cumulative IO count 00:08:55.670 7864.320 - 7923.898: 0.0442% ( 6) 00:08:55.670 7923.898 - 7983.476: 0.1400% ( 13) 00:08:55.670 7983.476 - 8043.055: 0.3169% ( 24) 00:08:55.670 8043.055 - 8102.633: 0.6486% ( 45) 00:08:55.670 8102.633 - 8162.211: 1.1645% ( 70) 00:08:55.670 8162.211 - 8221.789: 1.9679% ( 109) 00:08:55.670 8221.789 - 8281.367: 3.1471% ( 160) 00:08:55.670 8281.367 - 8340.945: 4.8275% ( 228) 00:08:55.670 8340.945 - 8400.524: 7.0607% ( 303) 00:08:55.670 8400.524 - 8460.102: 
9.7730% ( 368) 00:08:55.670 8460.102 - 8519.680: 12.9348% ( 429) 00:08:55.670 8519.680 - 8579.258: 16.6568% ( 505) 00:08:55.670 8579.258 - 8638.836: 20.7252% ( 552) 00:08:55.670 8638.836 - 8698.415: 24.9558% ( 574) 00:08:55.670 8698.415 - 8757.993: 29.4295% ( 607) 00:08:55.670 8757.993 - 8817.571: 33.7485% ( 586) 00:08:55.670 8817.571 - 8877.149: 38.2444% ( 610) 00:08:55.670 8877.149 - 8936.727: 42.6002% ( 591) 00:08:55.670 8936.727 - 8996.305: 46.9045% ( 584) 00:08:55.670 8996.305 - 9055.884: 51.0834% ( 567) 00:08:55.670 9055.884 - 9115.462: 55.0708% ( 541) 00:08:55.670 9115.462 - 9175.040: 58.7559% ( 500) 00:08:55.670 9175.040 - 9234.618: 62.1683% ( 463) 00:08:55.670 9234.618 - 9294.196: 65.2344% ( 416) 00:08:55.670 9294.196 - 9353.775: 68.1604% ( 397) 00:08:55.670 9353.775 - 9413.353: 70.5999% ( 331) 00:08:55.670 9413.353 - 9472.931: 72.6931% ( 284) 00:08:55.670 9472.931 - 9532.509: 74.5357% ( 250) 00:08:55.670 9532.509 - 9592.087: 76.2529% ( 233) 00:08:55.670 9592.087 - 9651.665: 77.8302% ( 214) 00:08:55.670 9651.665 - 9711.244: 79.3779% ( 210) 00:08:55.670 9711.244 - 9770.822: 80.8520% ( 200) 00:08:55.670 9770.822 - 9830.400: 82.3408% ( 202) 00:08:55.670 9830.400 - 9889.978: 83.7264% ( 188) 00:08:55.670 9889.978 - 9949.556: 85.1047% ( 187) 00:08:55.670 9949.556 - 10009.135: 86.4976% ( 189) 00:08:55.670 10009.135 - 10068.713: 87.8538% ( 184) 00:08:55.670 10068.713 - 10128.291: 89.2394% ( 188) 00:08:55.670 10128.291 - 10187.869: 90.5071% ( 172) 00:08:55.670 10187.869 - 10247.447: 91.7748% ( 172) 00:08:55.670 10247.447 - 10307.025: 92.9982% ( 166) 00:08:55.670 10307.025 - 10366.604: 94.1185% ( 152) 00:08:55.670 10366.604 - 10426.182: 95.1356% ( 138) 00:08:55.670 10426.182 - 10485.760: 95.9979% ( 117) 00:08:55.670 10485.760 - 10545.338: 96.7202% ( 98) 00:08:55.670 10545.338 - 10604.916: 97.2877% ( 77) 00:08:55.670 10604.916 - 10664.495: 97.7373% ( 61) 00:08:55.670 10664.495 - 10724.073: 98.0837% ( 47) 00:08:55.670 10724.073 - 10783.651: 98.3048% ( 30) 00:08:55.670 10783.651 - 10843.229: 98.4449% ( 19) 00:08:55.670 10843.229 - 10902.807: 98.5849% ( 19) 00:08:55.670 10902.807 - 10962.385: 98.7028% ( 16) 00:08:55.670 10962.385 - 11021.964: 98.7986% ( 13) 00:08:55.670 11021.964 - 11081.542: 98.8871% ( 12) 00:08:55.670 11081.542 - 11141.120: 98.9534% ( 9) 00:08:55.670 11141.120 - 11200.698: 99.0050% ( 7) 00:08:55.670 11200.698 - 11260.276: 99.0345% ( 4) 00:08:55.670 11260.276 - 11319.855: 99.0419% ( 1) 00:08:55.670 11319.855 - 11379.433: 99.0566% ( 2) 00:08:55.670 26571.869 - 26691.025: 99.0713% ( 2) 00:08:55.670 26691.025 - 26810.182: 99.0861% ( 2) 00:08:55.670 26810.182 - 26929.338: 99.1082% ( 3) 00:08:55.670 26929.338 - 27048.495: 99.1377% ( 4) 00:08:55.670 27048.495 - 27167.651: 99.1598% ( 3) 00:08:55.670 27167.651 - 27286.807: 99.1819% ( 3) 00:08:55.670 27286.807 - 27405.964: 99.2040% ( 3) 00:08:55.670 27405.964 - 27525.120: 99.2261% ( 3) 00:08:55.670 27525.120 - 27644.276: 99.2556% ( 4) 00:08:55.670 27644.276 - 27763.433: 99.2703% ( 2) 00:08:55.670 27763.433 - 27882.589: 99.2925% ( 3) 00:08:55.670 27882.589 - 28001.745: 99.3146% ( 3) 00:08:55.670 28001.745 - 28120.902: 99.3367% ( 3) 00:08:55.670 28120.902 - 28240.058: 99.3662% ( 4) 00:08:55.670 28240.058 - 28359.215: 99.3883% ( 3) 00:08:55.670 28359.215 - 28478.371: 99.4104% ( 3) 00:08:55.670 28478.371 - 28597.527: 99.4325% ( 3) 00:08:55.670 28597.527 - 28716.684: 99.4472% ( 2) 00:08:55.670 28716.684 - 28835.840: 99.4693% ( 3) 00:08:55.670 28835.840 - 28954.996: 99.4915% ( 3) 00:08:55.670 28954.996 - 29074.153: 99.5209% ( 4) 00:08:55.670 
29074.153 - 29193.309: 99.5283% ( 1) 00:08:55.670 33602.095 - 33840.407: 99.5504% ( 3) 00:08:55.670 33840.407 - 34078.720: 99.6020% ( 7) 00:08:55.670 34078.720 - 34317.033: 99.6536% ( 7) 00:08:55.670 34317.033 - 34555.345: 99.7052% ( 7) 00:08:55.670 34555.345 - 34793.658: 99.7494% ( 6) 00:08:55.670 34793.658 - 35031.971: 99.8010% ( 7) 00:08:55.670 35031.971 - 35270.284: 99.8452% ( 6) 00:08:55.670 35270.284 - 35508.596: 99.8894% ( 6) 00:08:55.670 35508.596 - 35746.909: 99.9410% ( 7) 00:08:55.670 35746.909 - 35985.222: 99.9926% ( 7) 00:08:55.670 35985.222 - 36223.535: 100.0000% ( 1) 00:08:55.670 00:08:55.670 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:55.670 ============================================================================== 00:08:55.670 Range in us Cumulative IO count 00:08:55.670 7864.320 - 7923.898: 0.0516% ( 7) 00:08:55.670 7923.898 - 7983.476: 0.1621% ( 15) 00:08:55.670 7983.476 - 8043.055: 0.2874% ( 17) 00:08:55.670 8043.055 - 8102.633: 0.6117% ( 44) 00:08:55.670 8102.633 - 8162.211: 1.1571% ( 74) 00:08:55.670 8162.211 - 8221.789: 1.9458% ( 107) 00:08:55.670 8221.789 - 8281.367: 3.2061% ( 171) 00:08:55.670 8281.367 - 8340.945: 4.8349% ( 221) 00:08:55.670 8340.945 - 8400.524: 7.0018% ( 294) 00:08:55.670 8400.524 - 8460.102: 9.6624% ( 361) 00:08:55.670 8460.102 - 8519.680: 12.8685% ( 435) 00:08:55.670 8519.680 - 8579.258: 16.7453% ( 526) 00:08:55.670 8579.258 - 8638.836: 20.8358% ( 555) 00:08:55.670 8638.836 - 8698.415: 25.1474% ( 585) 00:08:55.670 8698.415 - 8757.993: 29.4517% ( 584) 00:08:55.670 8757.993 - 8817.571: 33.8296% ( 594) 00:08:55.670 8817.571 - 8877.149: 38.2297% ( 597) 00:08:55.670 8877.149 - 8936.727: 42.6076% ( 594) 00:08:55.670 8936.727 - 8996.305: 46.9487% ( 589) 00:08:55.670 8996.305 - 9055.884: 51.1498% ( 570) 00:08:55.670 9055.884 - 9115.462: 55.1076% ( 537) 00:08:55.670 9115.462 - 9175.040: 58.8665% ( 510) 00:08:55.670 9175.040 - 9234.618: 62.2863% ( 464) 00:08:55.670 9234.618 - 9294.196: 65.3523% ( 416) 00:08:55.670 9294.196 - 9353.775: 68.0719% ( 369) 00:08:55.670 9353.775 - 9413.353: 70.5262% ( 333) 00:08:55.670 9413.353 - 9472.931: 72.5825% ( 279) 00:08:55.670 9472.931 - 9532.509: 74.5209% ( 263) 00:08:55.670 9532.509 - 9592.087: 76.2603% ( 236) 00:08:55.670 9592.087 - 9651.665: 77.8302% ( 213) 00:08:55.670 9651.665 - 9711.244: 79.3485% ( 206) 00:08:55.670 9711.244 - 9770.822: 80.8004% ( 197) 00:08:55.670 9770.822 - 9830.400: 82.2524% ( 197) 00:08:55.670 9830.400 - 9889.978: 83.6453% ( 189) 00:08:55.670 9889.978 - 9949.556: 85.0088% ( 185) 00:08:55.670 9949.556 - 10009.135: 86.4166% ( 191) 00:08:55.670 10009.135 - 10068.713: 87.8022% ( 188) 00:08:55.670 10068.713 - 10128.291: 89.2025% ( 190) 00:08:55.670 10128.291 - 10187.869: 90.5808% ( 187) 00:08:55.670 10187.869 - 10247.447: 91.8411% ( 171) 00:08:55.670 10247.447 - 10307.025: 93.0719% ( 167) 00:08:55.670 10307.025 - 10366.604: 94.1554% ( 147) 00:08:55.670 10366.604 - 10426.182: 95.1651% ( 137) 00:08:55.670 10426.182 - 10485.760: 96.0053% ( 114) 00:08:55.670 10485.760 - 10545.338: 96.7497% ( 101) 00:08:55.670 10545.338 - 10604.916: 97.3172% ( 77) 00:08:55.670 10604.916 - 10664.495: 97.7815% ( 63) 00:08:55.670 10664.495 - 10724.073: 98.0985% ( 43) 00:08:55.670 10724.073 - 10783.651: 98.3048% ( 28) 00:08:55.670 10783.651 - 10843.229: 98.4891% ( 25) 00:08:55.670 10843.229 - 10902.807: 98.6291% ( 19) 00:08:55.670 10902.807 - 10962.385: 98.7544% ( 17) 00:08:55.670 10962.385 - 11021.964: 98.8355% ( 11) 00:08:55.670 11021.964 - 11081.542: 98.9092% ( 10) 00:08:55.670 11081.542 - 
11141.120: 98.9682% ( 8) 00:08:55.670 11141.120 - 11200.698: 98.9976% ( 4) 00:08:55.670 11200.698 - 11260.276: 99.0198% ( 3) 00:08:55.670 11260.276 - 11319.855: 99.0419% ( 3) 00:08:55.670 11319.855 - 11379.433: 99.0566% ( 2) 00:08:55.670 23831.273 - 23950.429: 99.0713% ( 2) 00:08:55.670 23950.429 - 24069.585: 99.1008% ( 4) 00:08:55.670 24069.585 - 24188.742: 99.1229% ( 3) 00:08:55.670 24188.742 - 24307.898: 99.1450% ( 3) 00:08:55.670 24307.898 - 24427.055: 99.1672% ( 3) 00:08:55.670 24427.055 - 24546.211: 99.1893% ( 3) 00:08:55.670 24546.211 - 24665.367: 99.2114% ( 3) 00:08:55.670 24665.367 - 24784.524: 99.2335% ( 3) 00:08:55.670 24784.524 - 24903.680: 99.2556% ( 3) 00:08:55.670 24903.680 - 25022.836: 99.2777% ( 3) 00:08:55.670 25022.836 - 25141.993: 99.3072% ( 4) 00:08:55.670 25141.993 - 25261.149: 99.3219% ( 2) 00:08:55.670 25261.149 - 25380.305: 99.3514% ( 4) 00:08:55.670 25380.305 - 25499.462: 99.3735% ( 3) 00:08:55.670 25499.462 - 25618.618: 99.3956% ( 3) 00:08:55.670 25618.618 - 25737.775: 99.4177% ( 3) 00:08:55.670 25737.775 - 25856.931: 99.4399% ( 3) 00:08:55.670 25856.931 - 25976.087: 99.4620% ( 3) 00:08:55.670 25976.087 - 26095.244: 99.4841% ( 3) 00:08:55.670 26095.244 - 26214.400: 99.5062% ( 3) 00:08:55.670 26214.400 - 26333.556: 99.5283% ( 3) 00:08:55.670 30980.655 - 31218.967: 99.5725% ( 6) 00:08:55.670 31218.967 - 31457.280: 99.6167% ( 6) 00:08:55.670 31457.280 - 31695.593: 99.6610% ( 6) 00:08:55.670 31695.593 - 31933.905: 99.7126% ( 7) 00:08:55.670 31933.905 - 32172.218: 99.7568% ( 6) 00:08:55.670 32172.218 - 32410.531: 99.8084% ( 7) 00:08:55.670 32410.531 - 32648.844: 99.8600% ( 7) 00:08:55.670 32648.844 - 32887.156: 99.9042% ( 6) 00:08:55.670 32887.156 - 33125.469: 99.9558% ( 7) 00:08:55.670 33125.469 - 33363.782: 100.0000% ( 6) 00:08:55.670 00:08:55.670 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:55.670 ============================================================================== 00:08:55.670 Range in us Cumulative IO count 00:08:55.670 7923.898 - 7983.476: 0.0884% ( 12) 00:08:55.670 7983.476 - 8043.055: 0.2432% ( 21) 00:08:55.670 8043.055 - 8102.633: 0.5675% ( 44) 00:08:55.670 8102.633 - 8162.211: 1.0613% ( 67) 00:08:55.670 8162.211 - 8221.789: 1.8352% ( 105) 00:08:55.670 8221.789 - 8281.367: 2.9629% ( 153) 00:08:55.670 8281.367 - 8340.945: 4.5548% ( 216) 00:08:55.670 8340.945 - 8400.524: 6.6701% ( 287) 00:08:55.670 8400.524 - 8460.102: 9.4413% ( 376) 00:08:55.670 8460.102 - 8519.680: 12.8169% ( 458) 00:08:55.670 8519.680 - 8579.258: 16.6642% ( 522) 00:08:55.670 8579.258 - 8638.836: 20.6958% ( 547) 00:08:55.670 8638.836 - 8698.415: 25.0958% ( 597) 00:08:55.670 8698.415 - 8757.993: 29.3853% ( 582) 00:08:55.670 8757.993 - 8817.571: 33.8075% ( 600) 00:08:55.670 8817.571 - 8877.149: 38.2223% ( 599) 00:08:55.670 8877.149 - 8936.727: 42.6960% ( 607) 00:08:55.670 8936.727 - 8996.305: 47.0077% ( 585) 00:08:55.670 8996.305 - 9055.884: 51.2235% ( 572) 00:08:55.670 9055.884 - 9115.462: 55.3066% ( 554) 00:08:55.670 9115.462 - 9175.040: 59.0654% ( 510) 00:08:55.670 9175.040 - 9234.618: 62.4779% ( 463) 00:08:55.670 9234.618 - 9294.196: 65.4923% ( 409) 00:08:55.670 9294.196 - 9353.775: 68.1825% ( 365) 00:08:55.670 9353.775 - 9413.353: 70.6147% ( 330) 00:08:55.670 9413.353 - 9472.931: 72.6341% ( 274) 00:08:55.670 9472.931 - 9532.509: 74.5357% ( 258) 00:08:55.670 9532.509 - 9592.087: 76.2972% ( 239) 00:08:55.670 9592.087 - 9651.665: 78.0144% ( 233) 00:08:55.670 9651.665 - 9711.244: 79.5327% ( 206) 00:08:55.670 9711.244 - 9770.822: 81.0142% ( 201) 00:08:55.670 
9770.822 - 9830.400: 82.4735% ( 198) 00:08:55.670 9830.400 - 9889.978: 83.9107% ( 195) 00:08:55.670 9889.978 - 9949.556: 85.2447% ( 181) 00:08:55.670 9949.556 - 10009.135: 86.6450% ( 190) 00:08:55.670 10009.135 - 10068.713: 87.9938% ( 183) 00:08:55.670 10068.713 - 10128.291: 89.2836% ( 175) 00:08:55.670 10128.291 - 10187.869: 90.5734% ( 175) 00:08:55.670 10187.869 - 10247.447: 91.8632% ( 175) 00:08:55.670 10247.447 - 10307.025: 93.0719% ( 164) 00:08:55.670 10307.025 - 10366.604: 94.1701% ( 149) 00:08:55.670 10366.604 - 10426.182: 95.1798% ( 137) 00:08:55.670 10426.182 - 10485.760: 96.0864% ( 123) 00:08:55.670 10485.760 - 10545.338: 96.8234% ( 100) 00:08:55.670 10545.338 - 10604.916: 97.4425% ( 84) 00:08:55.670 10604.916 - 10664.495: 97.8479% ( 55) 00:08:55.670 10664.495 - 10724.073: 98.1279% ( 38) 00:08:55.670 10724.073 - 10783.651: 98.3491% ( 30) 00:08:55.670 10783.651 - 10843.229: 98.5186% ( 23) 00:08:55.670 10843.229 - 10902.807: 98.6512% ( 18) 00:08:55.670 10902.807 - 10962.385: 98.7986% ( 20) 00:08:55.670 10962.385 - 11021.964: 98.8871% ( 12) 00:08:55.670 11021.964 - 11081.542: 98.9460% ( 8) 00:08:55.670 11081.542 - 11141.120: 98.9755% ( 4) 00:08:55.670 11141.120 - 11200.698: 98.9976% ( 3) 00:08:55.670 11200.698 - 11260.276: 99.0198% ( 3) 00:08:55.670 11260.276 - 11319.855: 99.0419% ( 3) 00:08:55.670 11319.855 - 11379.433: 99.0492% ( 1) 00:08:55.670 11379.433 - 11439.011: 99.0566% ( 1) 00:08:55.670 21090.676 - 21209.833: 99.0640% ( 1) 00:08:55.671 21209.833 - 21328.989: 99.0713% ( 1) 00:08:55.671 21328.989 - 21448.145: 99.1008% ( 4) 00:08:55.671 21448.145 - 21567.302: 99.1303% ( 4) 00:08:55.671 21567.302 - 21686.458: 99.1524% ( 3) 00:08:55.671 21686.458 - 21805.615: 99.1745% ( 3) 00:08:55.671 21805.615 - 21924.771: 99.1966% ( 3) 00:08:55.671 21924.771 - 22043.927: 99.2188% ( 3) 00:08:55.671 22043.927 - 22163.084: 99.2482% ( 4) 00:08:55.671 22163.084 - 22282.240: 99.2703% ( 3) 00:08:55.671 22282.240 - 22401.396: 99.2925% ( 3) 00:08:55.671 22401.396 - 22520.553: 99.3146% ( 3) 00:08:55.671 22520.553 - 22639.709: 99.3367% ( 3) 00:08:55.671 22639.709 - 22758.865: 99.3588% ( 3) 00:08:55.671 22758.865 - 22878.022: 99.3809% ( 3) 00:08:55.671 22878.022 - 22997.178: 99.4030% ( 3) 00:08:55.671 22997.178 - 23116.335: 99.4325% ( 4) 00:08:55.671 23116.335 - 23235.491: 99.4546% ( 3) 00:08:55.671 23235.491 - 23354.647: 99.4693% ( 2) 00:08:55.671 23354.647 - 23473.804: 99.4915% ( 3) 00:08:55.671 23473.804 - 23592.960: 99.5209% ( 4) 00:08:55.671 23592.960 - 23712.116: 99.5283% ( 1) 00:08:55.671 28240.058 - 28359.215: 99.5357% ( 1) 00:08:55.671 28359.215 - 28478.371: 99.5578% ( 3) 00:08:55.671 28478.371 - 28597.527: 99.5799% ( 3) 00:08:55.671 28597.527 - 28716.684: 99.6020% ( 3) 00:08:55.671 28716.684 - 28835.840: 99.6241% ( 3) 00:08:55.671 28835.840 - 28954.996: 99.6462% ( 3) 00:08:55.671 28954.996 - 29074.153: 99.6683% ( 3) 00:08:55.671 29074.153 - 29193.309: 99.6978% ( 4) 00:08:55.671 29193.309 - 29312.465: 99.7199% ( 3) 00:08:55.671 29312.465 - 29431.622: 99.7420% ( 3) 00:08:55.671 29431.622 - 29550.778: 99.7642% ( 3) 00:08:55.671 29550.778 - 29669.935: 99.7936% ( 4) 00:08:55.671 29669.935 - 29789.091: 99.8157% ( 3) 00:08:55.671 29789.091 - 29908.247: 99.8379% ( 3) 00:08:55.671 29908.247 - 30027.404: 99.8600% ( 3) 00:08:55.671 30027.404 - 30146.560: 99.8821% ( 3) 00:08:55.671 30146.560 - 30265.716: 99.9116% ( 4) 00:08:55.671 30265.716 - 30384.873: 99.9337% ( 3) 00:08:55.671 30384.873 - 30504.029: 99.9558% ( 3) 00:08:55.671 30504.029 - 30742.342: 100.0000% ( 6) 00:08:55.671 00:08:55.671 14:14:15 
nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:57.061 Initializing NVMe Controllers 00:08:57.061 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.061 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.061 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.061 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.061 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:57.061 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:57.061 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:57.061 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:57.061 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:57.061 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:57.061 Initialization complete. Launching workers. 00:08:57.061 ======================================================== 00:08:57.061 Latency(us) 00:08:57.061 Device Information : IOPS MiB/s Average min max 00:08:57.061 PCIE (0000:00:10.0) NSID 1 from core 0: 13247.30 155.24 9682.23 7892.67 36512.66 00:08:57.061 PCIE (0000:00:11.0) NSID 1 from core 0: 13247.30 155.24 9668.49 7991.20 35088.49 00:08:57.061 PCIE (0000:00:13.0) NSID 1 from core 0: 13247.30 155.24 9655.11 8055.13 33989.96 00:08:57.061 PCIE (0000:00:12.0) NSID 1 from core 0: 13247.30 155.24 9640.11 7941.89 32412.79 00:08:57.061 PCIE (0000:00:12.0) NSID 2 from core 0: 13247.30 155.24 9625.80 7959.98 30846.14 00:08:57.061 PCIE (0000:00:12.0) NSID 3 from core 0: 13247.30 155.24 9611.08 7891.50 29100.04 00:08:57.061 ======================================================== 00:08:57.061 Total : 79483.79 931.45 9647.14 7891.50 36512.66 00:08:57.061 00:08:57.061 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:57.061 ================================================================================= 00:08:57.061 1.00000% : 8162.211us 00:08:57.061 10.00000% : 8579.258us 00:08:57.061 25.00000% : 8877.149us 00:08:57.061 50.00000% : 9353.775us 00:08:57.061 75.00000% : 9830.400us 00:08:57.061 90.00000% : 10545.338us 00:08:57.061 95.00000% : 11439.011us 00:08:57.061 98.00000% : 13822.138us 00:08:57.061 99.00000% : 15013.702us 00:08:57.061 99.50000% : 27048.495us 00:08:57.061 99.90000% : 35985.222us 00:08:57.061 99.99000% : 36461.847us 00:08:57.061 99.99900% : 36700.160us 00:08:57.061 99.99990% : 36700.160us 00:08:57.061 99.99999% : 36700.160us 00:08:57.061 00:08:57.061 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:57.061 ================================================================================= 00:08:57.061 1.00000% : 8340.945us 00:08:57.061 10.00000% : 8698.415us 00:08:57.061 25.00000% : 8936.727us 00:08:57.061 50.00000% : 9294.196us 00:08:57.061 75.00000% : 9770.822us 00:08:57.061 90.00000% : 10485.760us 00:08:57.061 95.00000% : 11439.011us 00:08:57.061 98.00000% : 13702.982us 00:08:57.061 99.00000% : 15192.436us 00:08:57.061 99.50000% : 26691.025us 00:08:57.061 99.90000% : 34793.658us 00:08:57.061 99.99000% : 35270.284us 00:08:57.061 99.99900% : 35270.284us 00:08:57.061 99.99990% : 35270.284us 00:08:57.061 99.99999% : 35270.284us 00:08:57.061 00:08:57.061 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:57.061 ================================================================================= 00:08:57.061 1.00000% : 8340.945us 00:08:57.061 10.00000% : 8698.415us 00:08:57.061 25.00000% : 8936.727us 00:08:57.061 50.00000% : 9294.196us 
00:08:57.061 75.00000% : 9770.822us 00:08:57.061 90.00000% : 10545.338us 00:08:57.061 95.00000% : 11319.855us 00:08:57.061 98.00000% : 13166.778us 00:08:57.061 99.00000% : 15371.171us 00:08:57.061 99.50000% : 26214.400us 00:08:57.061 99.90000% : 33840.407us 00:08:57.061 99.99000% : 34078.720us 00:08:57.061 99.99900% : 34078.720us 00:08:57.061 99.99990% : 34078.720us 00:08:57.061 99.99999% : 34078.720us 00:08:57.061 00:08:57.061 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:57.061 ================================================================================= 00:08:57.061 1.00000% : 8340.945us 00:08:57.061 10.00000% : 8698.415us 00:08:57.061 25.00000% : 8936.727us 00:08:57.061 50.00000% : 9294.196us 00:08:57.061 75.00000% : 9770.822us 00:08:57.061 90.00000% : 10485.760us 00:08:57.061 95.00000% : 11260.276us 00:08:57.061 98.00000% : 13226.356us 00:08:57.061 99.00000% : 15252.015us 00:08:57.061 99.50000% : 24784.524us 00:08:57.061 99.90000% : 32172.218us 00:08:57.061 99.99000% : 32410.531us 00:08:57.061 99.99900% : 32648.844us 00:08:57.061 99.99990% : 32648.844us 00:08:57.061 99.99999% : 32648.844us 00:08:57.061 00:08:57.061 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:57.061 ================================================================================= 00:08:57.061 1.00000% : 8340.945us 00:08:57.061 10.00000% : 8698.415us 00:08:57.061 25.00000% : 8936.727us 00:08:57.061 50.00000% : 9294.196us 00:08:57.061 75.00000% : 9770.822us 00:08:57.061 90.00000% : 10485.760us 00:08:57.061 95.00000% : 11141.120us 00:08:57.061 98.00000% : 13345.513us 00:08:57.061 99.00000% : 15371.171us 00:08:57.061 99.50000% : 23473.804us 00:08:57.061 99.90000% : 30742.342us 00:08:57.061 99.99000% : 30980.655us 00:08:57.061 99.99900% : 30980.655us 00:08:57.061 99.99990% : 30980.655us 00:08:57.061 99.99999% : 30980.655us 00:08:57.061 00:08:57.061 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:57.061 ================================================================================= 00:08:57.061 1.00000% : 8281.367us 00:08:57.061 10.00000% : 8698.415us 00:08:57.061 25.00000% : 8936.727us 00:08:57.061 50.00000% : 9294.196us 00:08:57.062 75.00000% : 9770.822us 00:08:57.062 90.00000% : 10485.760us 00:08:57.062 95.00000% : 11141.120us 00:08:57.062 98.00000% : 13405.091us 00:08:57.062 99.00000% : 14834.967us 00:08:57.062 99.50000% : 22163.084us 00:08:57.062 99.90000% : 28835.840us 00:08:57.062 99.99000% : 29193.309us 00:08:57.062 99.99900% : 29193.309us 00:08:57.062 99.99990% : 29193.309us 00:08:57.062 99.99999% : 29193.309us 00:08:57.062 00:08:57.062 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:57.062 ============================================================================== 00:08:57.062 Range in us Cumulative IO count 00:08:57.062 7864.320 - 7923.898: 0.1359% ( 18) 00:08:57.062 7923.898 - 7983.476: 0.3095% ( 23) 00:08:57.062 7983.476 - 8043.055: 0.3925% ( 11) 00:08:57.062 8043.055 - 8102.633: 0.7926% ( 53) 00:08:57.062 8102.633 - 8162.211: 1.1926% ( 53) 00:08:57.062 8162.211 - 8221.789: 1.6531% ( 61) 00:08:57.062 8221.789 - 8281.367: 2.1739% ( 69) 00:08:57.062 8281.367 - 8340.945: 2.9136% ( 98) 00:08:57.062 8340.945 - 8400.524: 4.0685% ( 153) 00:08:57.062 8400.524 - 8460.102: 5.9330% ( 247) 00:08:57.062 8460.102 - 8519.680: 8.0239% ( 277) 00:08:57.062 8519.680 - 8579.258: 10.7337% ( 359) 00:08:57.062 8579.258 - 8638.836: 13.5341% ( 371) 00:08:57.062 8638.836 - 8698.415: 16.4704% ( 389) 00:08:57.062 8698.415 - 8757.993: 
19.5199% ( 404) 00:08:57.062 8757.993 - 8817.571: 22.7053% ( 422) 00:08:57.062 8817.571 - 8877.149: 26.2606% ( 471) 00:08:57.062 8877.149 - 8936.727: 29.4007% ( 416) 00:08:57.062 8936.727 - 8996.305: 32.5408% ( 416) 00:08:57.062 8996.305 - 9055.884: 35.7563% ( 426) 00:08:57.062 9055.884 - 9115.462: 39.0399% ( 435) 00:08:57.062 9115.462 - 9175.040: 42.1498% ( 412) 00:08:57.062 9175.040 - 9234.618: 45.4333% ( 435) 00:08:57.062 9234.618 - 9294.196: 48.8149% ( 448) 00:08:57.062 9294.196 - 9353.775: 52.2268% ( 452) 00:08:57.062 9353.775 - 9413.353: 55.5857% ( 445) 00:08:57.062 9413.353 - 9472.931: 59.0806% ( 463) 00:08:57.062 9472.931 - 9532.509: 62.3415% ( 432) 00:08:57.062 9532.509 - 9592.087: 65.2778% ( 389) 00:08:57.062 9592.087 - 9651.665: 68.0178% ( 363) 00:08:57.062 9651.665 - 9711.244: 70.7277% ( 359) 00:08:57.062 9711.244 - 9770.822: 73.0601% ( 309) 00:08:57.062 9770.822 - 9830.400: 75.4303% ( 314) 00:08:57.062 9830.400 - 9889.978: 77.6646% ( 296) 00:08:57.062 9889.978 - 9949.556: 79.3705% ( 226) 00:08:57.062 9949.556 - 10009.135: 81.0160% ( 218) 00:08:57.062 10009.135 - 10068.713: 82.4502% ( 190) 00:08:57.062 10068.713 - 10128.291: 83.6277% ( 156) 00:08:57.062 10128.291 - 10187.869: 84.7600% ( 150) 00:08:57.062 10187.869 - 10247.447: 85.8016% ( 138) 00:08:57.062 10247.447 - 10307.025: 86.8508% ( 139) 00:08:57.062 10307.025 - 10366.604: 87.8019% ( 126) 00:08:57.062 10366.604 - 10426.182: 88.6549% ( 113) 00:08:57.062 10426.182 - 10485.760: 89.3644% ( 94) 00:08:57.062 10485.760 - 10545.338: 90.0664% ( 93) 00:08:57.062 10545.338 - 10604.916: 90.6476% ( 77) 00:08:57.062 10604.916 - 10664.495: 91.2138% ( 75) 00:08:57.062 10664.495 - 10724.073: 91.7497% ( 71) 00:08:57.062 10724.073 - 10783.651: 92.2328% ( 64) 00:08:57.062 10783.651 - 10843.229: 92.5498% ( 42) 00:08:57.062 10843.229 - 10902.807: 92.9574% ( 54) 00:08:57.062 10902.807 - 10962.385: 93.2896% ( 44) 00:08:57.062 10962.385 - 11021.964: 93.6443% ( 47) 00:08:57.062 11021.964 - 11081.542: 93.9312% ( 38) 00:08:57.062 11081.542 - 11141.120: 94.1803% ( 33) 00:08:57.062 11141.120 - 11200.698: 94.4520% ( 36) 00:08:57.062 11200.698 - 11260.276: 94.6633% ( 28) 00:08:57.062 11260.276 - 11319.855: 94.8521% ( 25) 00:08:57.062 11319.855 - 11379.433: 94.9955% ( 19) 00:08:57.062 11379.433 - 11439.011: 95.1011% ( 14) 00:08:57.062 11439.011 - 11498.589: 95.1842% ( 11) 00:08:57.062 11498.589 - 11558.167: 95.2899% ( 14) 00:08:57.062 11558.167 - 11617.745: 95.3578% ( 9) 00:08:57.062 11617.745 - 11677.324: 95.4408% ( 11) 00:08:57.062 11677.324 - 11736.902: 95.5465% ( 14) 00:08:57.062 11736.902 - 11796.480: 95.6824% ( 18) 00:08:57.062 11796.480 - 11856.058: 95.7201% ( 5) 00:08:57.062 11856.058 - 11915.636: 95.8107% ( 12) 00:08:57.062 11915.636 - 11975.215: 95.8937% ( 11) 00:08:57.062 11975.215 - 12034.793: 95.9918% ( 13) 00:08:57.062 12034.793 - 12094.371: 96.0900% ( 13) 00:08:57.062 12094.371 - 12153.949: 96.1957% ( 14) 00:08:57.062 12153.949 - 12213.527: 96.2409% ( 6) 00:08:57.062 12213.527 - 12273.105: 96.2636% ( 3) 00:08:57.062 12273.105 - 12332.684: 96.3315% ( 9) 00:08:57.062 12332.684 - 12392.262: 96.3768% ( 6) 00:08:57.062 12392.262 - 12451.840: 96.4296% ( 7) 00:08:57.062 12451.840 - 12511.418: 96.4900% ( 8) 00:08:57.062 12511.418 - 12570.996: 96.5504% ( 8) 00:08:57.062 12570.996 - 12630.575: 96.6108% ( 8) 00:08:57.062 12630.575 - 12690.153: 96.6712% ( 8) 00:08:57.062 12690.153 - 12749.731: 96.7844% ( 15) 00:08:57.062 12749.731 - 12809.309: 96.9052% ( 16) 00:08:57.062 12809.309 - 12868.887: 96.9807% ( 10) 00:08:57.062 12868.887 - 12928.465: 
97.0713% ( 12) 00:08:57.062 12928.465 - 12988.044: 97.1467% ( 10) 00:08:57.062 12988.044 - 13047.622: 97.2071% ( 8) 00:08:57.062 13047.622 - 13107.200: 97.2977% ( 12) 00:08:57.062 13107.200 - 13166.778: 97.3581% ( 8) 00:08:57.062 13166.778 - 13226.356: 97.4260% ( 9) 00:08:57.062 13226.356 - 13285.935: 97.4713% ( 6) 00:08:57.062 13285.935 - 13345.513: 97.5317% ( 8) 00:08:57.062 13345.513 - 13405.091: 97.5845% ( 7) 00:08:57.062 13405.091 - 13464.669: 97.7053% ( 16) 00:08:57.062 13464.669 - 13524.247: 97.7959% ( 12) 00:08:57.062 13524.247 - 13583.825: 97.8487% ( 7) 00:08:57.062 13583.825 - 13643.404: 97.8940% ( 6) 00:08:57.062 13643.404 - 13702.982: 97.9318% ( 5) 00:08:57.062 13702.982 - 13762.560: 97.9620% ( 4) 00:08:57.062 13762.560 - 13822.138: 98.0374% ( 10) 00:08:57.062 13822.138 - 13881.716: 98.1205% ( 11) 00:08:57.062 13881.716 - 13941.295: 98.2035% ( 11) 00:08:57.062 13941.295 - 14000.873: 98.2639% ( 8) 00:08:57.062 14000.873 - 14060.451: 98.3092% ( 6) 00:08:57.062 14060.451 - 14120.029: 98.3696% ( 8) 00:08:57.062 14120.029 - 14179.607: 98.4224% ( 7) 00:08:57.062 14179.607 - 14239.185: 98.4752% ( 7) 00:08:57.062 14239.185 - 14298.764: 98.5205% ( 6) 00:08:57.062 14298.764 - 14358.342: 98.5809% ( 8) 00:08:57.062 14358.342 - 14417.920: 98.6413% ( 8) 00:08:57.062 14417.920 - 14477.498: 98.6941% ( 7) 00:08:57.062 14477.498 - 14537.076: 98.7243% ( 4) 00:08:57.062 14537.076 - 14596.655: 98.7847% ( 8) 00:08:57.062 14596.655 - 14656.233: 98.8074% ( 3) 00:08:57.062 14656.233 - 14715.811: 98.8678% ( 8) 00:08:57.062 14715.811 - 14775.389: 98.8979% ( 4) 00:08:57.062 14775.389 - 14834.967: 98.9281% ( 4) 00:08:57.062 14834.967 - 14894.545: 98.9508% ( 3) 00:08:57.062 14894.545 - 14954.124: 98.9734% ( 3) 00:08:57.062 14954.124 - 15013.702: 99.0036% ( 4) 00:08:57.062 15013.702 - 15073.280: 99.0338% ( 4) 00:08:57.062 24665.367 - 24784.524: 99.0640% ( 4) 00:08:57.062 24784.524 - 24903.680: 99.0867% ( 3) 00:08:57.062 24903.680 - 25022.836: 99.1168% ( 4) 00:08:57.062 25022.836 - 25141.993: 99.1395% ( 3) 00:08:57.062 25141.993 - 25261.149: 99.1772% ( 5) 00:08:57.062 25261.149 - 25380.305: 99.2074% ( 4) 00:08:57.062 25380.305 - 25499.462: 99.2376% ( 4) 00:08:57.062 25499.462 - 25618.618: 99.2678% ( 4) 00:08:57.062 25618.618 - 25737.775: 99.2754% ( 1) 00:08:57.062 26095.244 - 26214.400: 99.2829% ( 1) 00:08:57.062 26214.400 - 26333.556: 99.3207% ( 5) 00:08:57.062 26333.556 - 26452.713: 99.3508% ( 4) 00:08:57.062 26452.713 - 26571.869: 99.3810% ( 4) 00:08:57.062 26571.869 - 26691.025: 99.4112% ( 4) 00:08:57.062 26691.025 - 26810.182: 99.4414% ( 4) 00:08:57.062 26810.182 - 26929.338: 99.4792% ( 5) 00:08:57.062 26929.338 - 27048.495: 99.5094% ( 4) 00:08:57.062 27048.495 - 27167.651: 99.5169% ( 1) 00:08:57.062 34078.720 - 34317.033: 99.5622% ( 6) 00:08:57.062 34317.033 - 34555.345: 99.6075% ( 6) 00:08:57.062 34555.345 - 34793.658: 99.6603% ( 7) 00:08:57.062 34793.658 - 35031.971: 99.7132% ( 7) 00:08:57.062 35031.971 - 35270.284: 99.7585% ( 6) 00:08:57.062 35270.284 - 35508.596: 99.8113% ( 7) 00:08:57.062 35508.596 - 35746.909: 99.8566% ( 6) 00:08:57.062 35746.909 - 35985.222: 99.9019% ( 6) 00:08:57.062 35985.222 - 36223.535: 99.9396% ( 5) 00:08:57.062 36223.535 - 36461.847: 99.9925% ( 7) 00:08:57.062 36461.847 - 36700.160: 100.0000% ( 1) 00:08:57.062 00:08:57.062 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:57.062 ============================================================================== 00:08:57.062 Range in us Cumulative IO count 00:08:57.062 7983.476 - 8043.055: 0.0981% ( 13) 
00:08:57.062 8043.055 - 8102.633: 0.2340% ( 18) 00:08:57.062 8102.633 - 8162.211: 0.4076% ( 23) 00:08:57.062 8162.211 - 8221.789: 0.5737% ( 22) 00:08:57.062 8221.789 - 8281.367: 0.9360% ( 48) 00:08:57.062 8281.367 - 8340.945: 1.4342% ( 66) 00:08:57.062 8340.945 - 8400.524: 2.2192% ( 104) 00:08:57.062 8400.524 - 8460.102: 3.1175% ( 119) 00:08:57.062 8460.102 - 8519.680: 4.3101% ( 158) 00:08:57.062 8519.680 - 8579.258: 5.8348% ( 202) 00:08:57.062 8579.258 - 8638.836: 7.8804% ( 271) 00:08:57.062 8638.836 - 8698.415: 10.3865% ( 332) 00:08:57.062 8698.415 - 8757.993: 13.5870% ( 424) 00:08:57.062 8757.993 - 8817.571: 17.2101% ( 480) 00:08:57.062 8817.571 - 8877.149: 21.1202% ( 518) 00:08:57.062 8877.149 - 8936.727: 25.2868% ( 552) 00:08:57.062 8936.727 - 8996.305: 29.6422% ( 577) 00:08:57.062 8996.305 - 9055.884: 34.3750% ( 627) 00:08:57.063 9055.884 - 9115.462: 38.8210% ( 589) 00:08:57.063 9115.462 - 9175.040: 43.3726% ( 603) 00:08:57.063 9175.040 - 9234.618: 47.7657% ( 582) 00:08:57.063 9234.618 - 9294.196: 51.6380% ( 513) 00:08:57.063 9294.196 - 9353.775: 55.4650% ( 507) 00:08:57.063 9353.775 - 9413.353: 59.1259% ( 485) 00:08:57.063 9413.353 - 9472.931: 62.2811% ( 418) 00:08:57.063 9472.931 - 9532.509: 65.5948% ( 439) 00:08:57.063 9532.509 - 9592.087: 69.0670% ( 460) 00:08:57.063 9592.087 - 9651.665: 71.7089% ( 350) 00:08:57.063 9651.665 - 9711.244: 74.1848% ( 328) 00:08:57.063 9711.244 - 9770.822: 76.1775% ( 264) 00:08:57.063 9770.822 - 9830.400: 78.1854% ( 266) 00:08:57.063 9830.400 - 9889.978: 79.7554% ( 208) 00:08:57.063 9889.978 - 9949.556: 81.0386% ( 170) 00:08:57.063 9949.556 - 10009.135: 82.4653% ( 189) 00:08:57.063 10009.135 - 10068.713: 83.6655% ( 159) 00:08:57.063 10068.713 - 10128.291: 85.0242% ( 180) 00:08:57.063 10128.291 - 10187.869: 86.2017% ( 156) 00:08:57.063 10187.869 - 10247.447: 87.0999% ( 119) 00:08:57.063 10247.447 - 10307.025: 87.9982% ( 119) 00:08:57.063 10307.025 - 10366.604: 88.9719% ( 129) 00:08:57.063 10366.604 - 10426.182: 89.7569% ( 104) 00:08:57.063 10426.182 - 10485.760: 90.5571% ( 106) 00:08:57.063 10485.760 - 10545.338: 91.0854% ( 70) 00:08:57.063 10545.338 - 10604.916: 91.5383% ( 60) 00:08:57.063 10604.916 - 10664.495: 92.0818% ( 72) 00:08:57.063 10664.495 - 10724.073: 92.4366% ( 47) 00:08:57.063 10724.073 - 10783.651: 92.7536% ( 42) 00:08:57.063 10783.651 - 10843.229: 93.0254% ( 36) 00:08:57.063 10843.229 - 10902.807: 93.2971% ( 36) 00:08:57.063 10902.807 - 10962.385: 93.5839% ( 38) 00:08:57.063 10962.385 - 11021.964: 93.9463% ( 48) 00:08:57.063 11021.964 - 11081.542: 94.2029% ( 34) 00:08:57.063 11081.542 - 11141.120: 94.4369% ( 31) 00:08:57.063 11141.120 - 11200.698: 94.6256% ( 25) 00:08:57.063 11200.698 - 11260.276: 94.7539% ( 17) 00:08:57.063 11260.276 - 11319.855: 94.8521% ( 13) 00:08:57.063 11319.855 - 11379.433: 94.9502% ( 13) 00:08:57.063 11379.433 - 11439.011: 95.0332% ( 11) 00:08:57.063 11439.011 - 11498.589: 95.1011% ( 9) 00:08:57.063 11498.589 - 11558.167: 95.1691% ( 9) 00:08:57.063 11558.167 - 11617.745: 95.2446% ( 10) 00:08:57.063 11617.745 - 11677.324: 95.3050% ( 8) 00:08:57.063 11677.324 - 11736.902: 95.3955% ( 12) 00:08:57.063 11736.902 - 11796.480: 95.4786% ( 11) 00:08:57.063 11796.480 - 11856.058: 95.5691% ( 12) 00:08:57.063 11856.058 - 11915.636: 95.6975% ( 17) 00:08:57.063 11915.636 - 11975.215: 95.8484% ( 20) 00:08:57.063 11975.215 - 12034.793: 95.9994% ( 20) 00:08:57.063 12034.793 - 12094.371: 96.1881% ( 25) 00:08:57.063 12094.371 - 12153.949: 96.3466% ( 21) 00:08:57.063 12153.949 - 12213.527: 96.6184% ( 36) 00:08:57.063 12213.527 - 
12273.105: 96.7920% ( 23) 00:08:57.063 12273.105 - 12332.684: 96.8901% ( 13) 00:08:57.063 12332.684 - 12392.262: 96.9882% ( 13) 00:08:57.063 12392.262 - 12451.840: 97.0864% ( 13) 00:08:57.063 12451.840 - 12511.418: 97.1467% ( 8) 00:08:57.063 12511.418 - 12570.996: 97.2298% ( 11) 00:08:57.063 12570.996 - 12630.575: 97.2977% ( 9) 00:08:57.063 12630.575 - 12690.153: 97.3505% ( 7) 00:08:57.063 12690.153 - 12749.731: 97.3883% ( 5) 00:08:57.063 12749.731 - 12809.309: 97.4109% ( 3) 00:08:57.063 12809.309 - 12868.887: 97.4789% ( 9) 00:08:57.063 12868.887 - 12928.465: 97.5543% ( 10) 00:08:57.063 12928.465 - 12988.044: 97.5996% ( 6) 00:08:57.063 12988.044 - 13047.622: 97.6298% ( 4) 00:08:57.063 13047.622 - 13107.200: 97.6751% ( 6) 00:08:57.063 13107.200 - 13166.778: 97.7129% ( 5) 00:08:57.063 13166.778 - 13226.356: 97.7582% ( 6) 00:08:57.063 13226.356 - 13285.935: 97.8110% ( 7) 00:08:57.063 13285.935 - 13345.513: 97.8412% ( 4) 00:08:57.063 13345.513 - 13405.091: 97.8638% ( 3) 00:08:57.063 13405.091 - 13464.669: 97.8865% ( 3) 00:08:57.063 13464.669 - 13524.247: 97.9242% ( 5) 00:08:57.063 13524.247 - 13583.825: 97.9469% ( 3) 00:08:57.063 13583.825 - 13643.404: 97.9620% ( 2) 00:08:57.063 13643.404 - 13702.982: 98.0148% ( 7) 00:08:57.063 13702.982 - 13762.560: 98.0903% ( 10) 00:08:57.063 13762.560 - 13822.138: 98.1507% ( 8) 00:08:57.063 13822.138 - 13881.716: 98.2186% ( 9) 00:08:57.063 13881.716 - 13941.295: 98.2714% ( 7) 00:08:57.063 13941.295 - 14000.873: 98.3167% ( 6) 00:08:57.063 14000.873 - 14060.451: 98.3998% ( 11) 00:08:57.063 14060.451 - 14120.029: 98.4300% ( 4) 00:08:57.063 14120.029 - 14179.607: 98.4450% ( 2) 00:08:57.063 14179.607 - 14239.185: 98.4677% ( 3) 00:08:57.063 14239.185 - 14298.764: 98.5054% ( 5) 00:08:57.063 14298.764 - 14358.342: 98.5809% ( 10) 00:08:57.063 14358.342 - 14417.920: 98.6338% ( 7) 00:08:57.063 14417.920 - 14477.498: 98.7017% ( 9) 00:08:57.063 14477.498 - 14537.076: 98.7243% ( 3) 00:08:57.063 14537.076 - 14596.655: 98.7470% ( 3) 00:08:57.063 14596.655 - 14656.233: 98.7696% ( 3) 00:08:57.063 14656.233 - 14715.811: 98.7923% ( 3) 00:08:57.063 14715.811 - 14775.389: 98.8074% ( 2) 00:08:57.063 14775.389 - 14834.967: 98.8376% ( 4) 00:08:57.063 14834.967 - 14894.545: 98.8602% ( 3) 00:08:57.063 14894.545 - 14954.124: 98.8904% ( 4) 00:08:57.063 14954.124 - 15013.702: 98.9130% ( 3) 00:08:57.063 15013.702 - 15073.280: 98.9432% ( 4) 00:08:57.063 15073.280 - 15132.858: 98.9734% ( 4) 00:08:57.063 15132.858 - 15192.436: 99.0036% ( 4) 00:08:57.063 15192.436 - 15252.015: 99.0338% ( 4) 00:08:57.063 24546.211 - 24665.367: 99.0489% ( 2) 00:08:57.063 24665.367 - 24784.524: 99.0791% ( 4) 00:08:57.063 24784.524 - 24903.680: 99.1093% ( 4) 00:08:57.063 24903.680 - 25022.836: 99.1244% ( 2) 00:08:57.063 25022.836 - 25141.993: 99.1546% ( 4) 00:08:57.063 25141.993 - 25261.149: 99.1848% ( 4) 00:08:57.063 25261.149 - 25380.305: 99.2074% ( 3) 00:08:57.063 25380.305 - 25499.462: 99.2376% ( 4) 00:08:57.063 25499.462 - 25618.618: 99.2678% ( 4) 00:08:57.063 25618.618 - 25737.775: 99.2905% ( 3) 00:08:57.063 25737.775 - 25856.931: 99.3207% ( 4) 00:08:57.063 25856.931 - 25976.087: 99.3433% ( 3) 00:08:57.063 25976.087 - 26095.244: 99.3735% ( 4) 00:08:57.063 26095.244 - 26214.400: 99.3961% ( 3) 00:08:57.063 26214.400 - 26333.556: 99.4188% ( 3) 00:08:57.063 26333.556 - 26452.713: 99.4490% ( 4) 00:08:57.063 26452.713 - 26571.869: 99.4716% ( 3) 00:08:57.063 26571.869 - 26691.025: 99.5018% ( 4) 00:08:57.063 26691.025 - 26810.182: 99.5169% ( 2) 00:08:57.063 32887.156 - 33125.469: 99.5697% ( 7) 00:08:57.063 
33125.469 - 33363.782: 99.6226% ( 7) 00:08:57.063 33363.782 - 33602.095: 99.6754% ( 7) 00:08:57.063 33602.095 - 33840.407: 99.7283% ( 7) 00:08:57.063 33840.407 - 34078.720: 99.7811% ( 7) 00:08:57.063 34078.720 - 34317.033: 99.8264% ( 6) 00:08:57.063 34317.033 - 34555.345: 99.8792% ( 7) 00:08:57.063 34555.345 - 34793.658: 99.9245% ( 6) 00:08:57.063 34793.658 - 35031.971: 99.9849% ( 8) 00:08:57.063 35031.971 - 35270.284: 100.0000% ( 2) 00:08:57.063 00:08:57.063 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:57.063 ============================================================================== 00:08:57.063 Range in us Cumulative IO count 00:08:57.063 8043.055 - 8102.633: 0.1208% ( 16) 00:08:57.063 8102.633 - 8162.211: 0.3548% ( 31) 00:08:57.063 8162.211 - 8221.789: 0.6341% ( 37) 00:08:57.063 8221.789 - 8281.367: 0.9435% ( 41) 00:08:57.063 8281.367 - 8340.945: 1.4568% ( 68) 00:08:57.063 8340.945 - 8400.524: 2.1966% ( 98) 00:08:57.063 8400.524 - 8460.102: 3.0722% ( 116) 00:08:57.063 8460.102 - 8519.680: 4.4082% ( 177) 00:08:57.063 8519.680 - 8579.258: 6.2651% ( 246) 00:08:57.063 8579.258 - 8638.836: 8.5749% ( 306) 00:08:57.063 8638.836 - 8698.415: 11.1715% ( 344) 00:08:57.063 8698.415 - 8757.993: 14.2361% ( 406) 00:08:57.063 8757.993 - 8817.571: 17.9876% ( 497) 00:08:57.063 8817.571 - 8877.149: 21.9354% ( 523) 00:08:57.063 8877.149 - 8936.727: 26.1473% ( 558) 00:08:57.063 8936.727 - 8996.305: 30.4423% ( 569) 00:08:57.063 8996.305 - 9055.884: 34.8505% ( 584) 00:08:57.063 9055.884 - 9115.462: 39.3418% ( 595) 00:08:57.063 9115.462 - 9175.040: 43.4103% ( 539) 00:08:57.063 9175.040 - 9234.618: 47.2449% ( 508) 00:08:57.063 9234.618 - 9294.196: 51.1473% ( 517) 00:08:57.063 9294.196 - 9353.775: 54.8460% ( 490) 00:08:57.063 9353.775 - 9413.353: 58.3031% ( 458) 00:08:57.063 9413.353 - 9472.931: 61.6093% ( 438) 00:08:57.063 9472.931 - 9532.509: 64.7796% ( 420) 00:08:57.063 9532.509 - 9592.087: 68.1763% ( 450) 00:08:57.063 9592.087 - 9651.665: 70.9013% ( 361) 00:08:57.063 9651.665 - 9711.244: 73.3167% ( 320) 00:08:57.063 9711.244 - 9770.822: 75.4152% ( 278) 00:08:57.063 9770.822 - 9830.400: 77.3777% ( 260) 00:08:57.063 9830.400 - 9889.978: 78.9931% ( 214) 00:08:57.063 9889.978 - 9949.556: 80.4574% ( 194) 00:08:57.063 9949.556 - 10009.135: 81.8463% ( 184) 00:08:57.063 10009.135 - 10068.713: 83.1144% ( 168) 00:08:57.063 10068.713 - 10128.291: 84.3675% ( 166) 00:08:57.063 10128.291 - 10187.869: 85.3336% ( 128) 00:08:57.063 10187.869 - 10247.447: 86.3829% ( 139) 00:08:57.063 10247.447 - 10307.025: 87.2660% ( 117) 00:08:57.063 10307.025 - 10366.604: 88.2775% ( 134) 00:08:57.063 10366.604 - 10426.182: 88.9946% ( 95) 00:08:57.063 10426.182 - 10485.760: 89.9909% ( 132) 00:08:57.063 10485.760 - 10545.338: 90.6778% ( 91) 00:08:57.063 10545.338 - 10604.916: 91.1685% ( 65) 00:08:57.063 10604.916 - 10664.495: 91.7497% ( 77) 00:08:57.063 10664.495 - 10724.073: 92.3536% ( 80) 00:08:57.063 10724.073 - 10783.651: 92.8140% ( 61) 00:08:57.064 10783.651 - 10843.229: 93.1688% ( 47) 00:08:57.064 10843.229 - 10902.807: 93.5236% ( 47) 00:08:57.064 10902.807 - 10962.385: 93.8632% ( 45) 00:08:57.064 10962.385 - 11021.964: 94.1576% ( 39) 00:08:57.064 11021.964 - 11081.542: 94.3765% ( 29) 00:08:57.064 11081.542 - 11141.120: 94.5879% ( 28) 00:08:57.064 11141.120 - 11200.698: 94.7690% ( 24) 00:08:57.064 11200.698 - 11260.276: 94.9275% ( 21) 00:08:57.064 11260.276 - 11319.855: 95.0483% ( 16) 00:08:57.064 11319.855 - 11379.433: 95.1238% ( 10) 00:08:57.064 11379.433 - 11439.011: 95.2219% ( 13) 00:08:57.064 11439.011 - 
11498.589: 95.3578% ( 18) 00:08:57.064 11498.589 - 11558.167: 95.5540% ( 26) 00:08:57.064 11558.167 - 11617.745: 95.8484% ( 39) 00:08:57.064 11617.745 - 11677.324: 96.1579% ( 41) 00:08:57.064 11677.324 - 11736.902: 96.4372% ( 37) 00:08:57.064 11736.902 - 11796.480: 96.6335% ( 26) 00:08:57.064 11796.480 - 11856.058: 96.7467% ( 15) 00:08:57.064 11856.058 - 11915.636: 96.8448% ( 13) 00:08:57.064 11915.636 - 11975.215: 96.9505% ( 14) 00:08:57.064 11975.215 - 12034.793: 97.0184% ( 9) 00:08:57.064 12034.793 - 12094.371: 97.0864% ( 9) 00:08:57.064 12094.371 - 12153.949: 97.1694% ( 11) 00:08:57.064 12153.949 - 12213.527: 97.2373% ( 9) 00:08:57.064 12213.527 - 12273.105: 97.3053% ( 9) 00:08:57.064 12273.105 - 12332.684: 97.4109% ( 14) 00:08:57.064 12332.684 - 12392.262: 97.5242% ( 15) 00:08:57.064 12392.262 - 12451.840: 97.6223% ( 13) 00:08:57.064 12451.840 - 12511.418: 97.7129% ( 12) 00:08:57.064 12511.418 - 12570.996: 97.8034% ( 12) 00:08:57.064 12570.996 - 12630.575: 97.8336% ( 4) 00:08:57.064 12630.575 - 12690.153: 97.8563% ( 3) 00:08:57.064 12690.153 - 12749.731: 97.8714% ( 2) 00:08:57.064 12749.731 - 12809.309: 97.8865% ( 2) 00:08:57.064 12809.309 - 12868.887: 97.9016% ( 2) 00:08:57.064 12868.887 - 12928.465: 97.9242% ( 3) 00:08:57.064 12928.465 - 12988.044: 97.9393% ( 2) 00:08:57.064 12988.044 - 13047.622: 97.9695% ( 4) 00:08:57.064 13047.622 - 13107.200: 97.9921% ( 3) 00:08:57.064 13107.200 - 13166.778: 98.0148% ( 3) 00:08:57.064 13166.778 - 13226.356: 98.0374% ( 3) 00:08:57.064 13226.356 - 13285.935: 98.0601% ( 3) 00:08:57.064 13285.935 - 13345.513: 98.0676% ( 1) 00:08:57.064 13643.404 - 13702.982: 98.0978% ( 4) 00:08:57.064 13702.982 - 13762.560: 98.1733% ( 10) 00:08:57.064 13762.560 - 13822.138: 98.2186% ( 6) 00:08:57.064 13822.138 - 13881.716: 98.2337% ( 2) 00:08:57.064 13881.716 - 13941.295: 98.2563% ( 3) 00:08:57.064 13941.295 - 14000.873: 98.2790% ( 3) 00:08:57.064 14000.873 - 14060.451: 98.3092% ( 4) 00:08:57.064 14060.451 - 14120.029: 98.3318% ( 3) 00:08:57.064 14120.029 - 14179.607: 98.3620% ( 4) 00:08:57.064 14179.607 - 14239.185: 98.3771% ( 2) 00:08:57.064 14239.185 - 14298.764: 98.3998% ( 3) 00:08:57.064 14298.764 - 14358.342: 98.4224% ( 3) 00:08:57.064 14358.342 - 14417.920: 98.4526% ( 4) 00:08:57.064 14417.920 - 14477.498: 98.4828% ( 4) 00:08:57.064 14477.498 - 14537.076: 98.5809% ( 13) 00:08:57.064 14537.076 - 14596.655: 98.6639% ( 11) 00:08:57.064 14596.655 - 14656.233: 98.7394% ( 10) 00:08:57.064 14656.233 - 14715.811: 98.7621% ( 3) 00:08:57.064 14715.811 - 14775.389: 98.7847% ( 3) 00:08:57.064 14775.389 - 14834.967: 98.8074% ( 3) 00:08:57.064 14834.967 - 14894.545: 98.8300% ( 3) 00:08:57.064 14894.545 - 14954.124: 98.8527% ( 3) 00:08:57.064 14954.124 - 15013.702: 98.8753% ( 3) 00:08:57.064 15013.702 - 15073.280: 98.9055% ( 4) 00:08:57.064 15073.280 - 15132.858: 98.9281% ( 3) 00:08:57.064 15132.858 - 15192.436: 98.9508% ( 3) 00:08:57.064 15192.436 - 15252.015: 98.9810% ( 4) 00:08:57.064 15252.015 - 15371.171: 99.0338% ( 7) 00:08:57.064 23831.273 - 23950.429: 99.0489% ( 2) 00:08:57.064 23950.429 - 24069.585: 99.0716% ( 3) 00:08:57.064 24069.585 - 24188.742: 99.0942% ( 3) 00:08:57.064 24188.742 - 24307.898: 99.1168% ( 3) 00:08:57.064 24307.898 - 24427.055: 99.1470% ( 4) 00:08:57.064 24427.055 - 24546.211: 99.1697% ( 3) 00:08:57.064 24546.211 - 24665.367: 99.1923% ( 3) 00:08:57.064 24665.367 - 24784.524: 99.2225% ( 4) 00:08:57.064 24784.524 - 24903.680: 99.2376% ( 2) 00:08:57.064 24903.680 - 25022.836: 99.2678% ( 4) 00:08:57.064 25022.836 - 25141.993: 99.2905% ( 3) 
00:08:57.064 25141.993 - 25261.149: 99.3131% ( 3) 00:08:57.064 25261.149 - 25380.305: 99.3357% ( 3) 00:08:57.064 25380.305 - 25499.462: 99.3659% ( 4) 00:08:57.064 25499.462 - 25618.618: 99.3886% ( 3) 00:08:57.064 25618.618 - 25737.775: 99.4188% ( 4) 00:08:57.064 25737.775 - 25856.931: 99.4414% ( 3) 00:08:57.064 25856.931 - 25976.087: 99.4716% ( 4) 00:08:57.064 25976.087 - 26095.244: 99.4943% ( 3) 00:08:57.064 26095.244 - 26214.400: 99.5169% ( 3) 00:08:57.064 32172.218 - 32410.531: 99.5622% ( 6) 00:08:57.064 32410.531 - 32648.844: 99.6301% ( 9) 00:08:57.064 32648.844 - 32887.156: 99.6905% ( 8) 00:08:57.064 32887.156 - 33125.469: 99.7509% ( 8) 00:08:57.064 33125.469 - 33363.782: 99.8188% ( 9) 00:08:57.064 33363.782 - 33602.095: 99.8792% ( 8) 00:08:57.064 33602.095 - 33840.407: 99.9547% ( 10) 00:08:57.064 33840.407 - 34078.720: 100.0000% ( 6) 00:08:57.064 00:08:57.064 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:57.064 ============================================================================== 00:08:57.064 Range in us Cumulative IO count 00:08:57.064 7923.898 - 7983.476: 0.0302% ( 4) 00:08:57.064 7983.476 - 8043.055: 0.1057% ( 10) 00:08:57.064 8043.055 - 8102.633: 0.2038% ( 13) 00:08:57.064 8102.633 - 8162.211: 0.3623% ( 21) 00:08:57.064 8162.211 - 8221.789: 0.6039% ( 32) 00:08:57.064 8221.789 - 8281.367: 0.9284% ( 43) 00:08:57.064 8281.367 - 8340.945: 1.3587% ( 57) 00:08:57.064 8340.945 - 8400.524: 1.9701% ( 81) 00:08:57.064 8400.524 - 8460.102: 2.7476% ( 103) 00:08:57.064 8460.102 - 8519.680: 3.8345% ( 144) 00:08:57.064 8519.680 - 8579.258: 5.5405% ( 226) 00:08:57.064 8579.258 - 8638.836: 7.7144% ( 288) 00:08:57.064 8638.836 - 8698.415: 10.3412% ( 348) 00:08:57.064 8698.415 - 8757.993: 13.5341% ( 423) 00:08:57.064 8757.993 - 8817.571: 17.2705% ( 495) 00:08:57.064 8817.571 - 8877.149: 21.4447% ( 553) 00:08:57.064 8877.149 - 8936.727: 26.0870% ( 615) 00:08:57.064 8936.727 - 8996.305: 30.5329% ( 589) 00:08:57.064 8996.305 - 9055.884: 35.1600% ( 613) 00:08:57.064 9055.884 - 9115.462: 39.4248% ( 565) 00:08:57.064 9115.462 - 9175.040: 43.4934% ( 539) 00:08:57.064 9175.040 - 9234.618: 47.4638% ( 526) 00:08:57.064 9234.618 - 9294.196: 51.4040% ( 522) 00:08:57.064 9294.196 - 9353.775: 55.1178% ( 492) 00:08:57.064 9353.775 - 9413.353: 58.5824% ( 459) 00:08:57.064 9413.353 - 9472.931: 61.7452% ( 419) 00:08:57.064 9472.931 - 9532.509: 65.1646% ( 453) 00:08:57.064 9532.509 - 9592.087: 68.1461% ( 395) 00:08:57.064 9592.087 - 9651.665: 70.9541% ( 372) 00:08:57.064 9651.665 - 9711.244: 73.1733% ( 294) 00:08:57.064 9711.244 - 9770.822: 75.4001% ( 295) 00:08:57.064 9770.822 - 9830.400: 77.2569% ( 246) 00:08:57.064 9830.400 - 9889.978: 78.9780% ( 228) 00:08:57.064 9889.978 - 9949.556: 80.5480% ( 208) 00:08:57.064 9949.556 - 10009.135: 81.9520% ( 186) 00:08:57.064 10009.135 - 10068.713: 83.2579% ( 173) 00:08:57.064 10068.713 - 10128.291: 84.4656% ( 160) 00:08:57.064 10128.291 - 10187.869: 85.6507% ( 157) 00:08:57.064 10187.869 - 10247.447: 86.5942% ( 125) 00:08:57.064 10247.447 - 10307.025: 87.5830% ( 131) 00:08:57.064 10307.025 - 10366.604: 88.4435% ( 114) 00:08:57.064 10366.604 - 10426.182: 89.3418% ( 119) 00:08:57.064 10426.182 - 10485.760: 90.1042% ( 101) 00:08:57.064 10485.760 - 10545.338: 90.8062% ( 93) 00:08:57.064 10545.338 - 10604.916: 91.4478% ( 85) 00:08:57.064 10604.916 - 10664.495: 92.0063% ( 74) 00:08:57.064 10664.495 - 10724.073: 92.5876% ( 77) 00:08:57.064 10724.073 - 10783.651: 93.1763% ( 78) 00:08:57.064 10783.651 - 10843.229: 93.5839% ( 54) 00:08:57.064 10843.229 - 
10902.807: 93.8783% ( 39) 00:08:57.064 10902.807 - 10962.385: 94.1954% ( 42) 00:08:57.064 10962.385 - 11021.964: 94.4067% ( 28) 00:08:57.064 11021.964 - 11081.542: 94.6030% ( 26) 00:08:57.064 11081.542 - 11141.120: 94.7690% ( 22) 00:08:57.064 11141.120 - 11200.698: 94.9653% ( 26) 00:08:57.064 11200.698 - 11260.276: 95.2295% ( 35) 00:08:57.064 11260.276 - 11319.855: 95.4257% ( 26) 00:08:57.064 11319.855 - 11379.433: 95.5993% ( 23) 00:08:57.064 11379.433 - 11439.011: 95.6975% ( 13) 00:08:57.064 11439.011 - 11498.589: 95.8031% ( 14) 00:08:57.064 11498.589 - 11558.167: 95.9164% ( 15) 00:08:57.064 11558.167 - 11617.745: 96.0069% ( 12) 00:08:57.064 11617.745 - 11677.324: 96.1126% ( 14) 00:08:57.064 11677.324 - 11736.902: 96.2711% ( 21) 00:08:57.064 11736.902 - 11796.480: 96.4296% ( 21) 00:08:57.064 11796.480 - 11856.058: 96.6033% ( 23) 00:08:57.064 11856.058 - 11915.636: 96.7618% ( 21) 00:08:57.064 11915.636 - 11975.215: 96.8750% ( 15) 00:08:57.064 11975.215 - 12034.793: 96.9807% ( 14) 00:08:57.064 12034.793 - 12094.371: 97.0788% ( 13) 00:08:57.064 12094.371 - 12153.949: 97.1467% ( 9) 00:08:57.064 12153.949 - 12213.527: 97.1920% ( 6) 00:08:57.064 12213.527 - 12273.105: 97.2600% ( 9) 00:08:57.064 12273.105 - 12332.684: 97.3053% ( 6) 00:08:57.064 12332.684 - 12392.262: 97.3581% ( 7) 00:08:57.064 12392.262 - 12451.840: 97.4336% ( 10) 00:08:57.064 12451.840 - 12511.418: 97.5317% ( 13) 00:08:57.064 12511.418 - 12570.996: 97.6147% ( 11) 00:08:57.064 12570.996 - 12630.575: 97.6751% ( 8) 00:08:57.064 12630.575 - 12690.153: 97.7506% ( 10) 00:08:57.064 12690.153 - 12749.731: 97.8110% ( 8) 00:08:57.064 12749.731 - 12809.309: 97.8487% ( 5) 00:08:57.065 12809.309 - 12868.887: 97.9016% ( 7) 00:08:57.065 12868.887 - 12928.465: 97.9242% ( 3) 00:08:57.065 12928.465 - 12988.044: 97.9393% ( 2) 00:08:57.065 12988.044 - 13047.622: 97.9544% ( 2) 00:08:57.065 13047.622 - 13107.200: 97.9771% ( 3) 00:08:57.065 13107.200 - 13166.778: 97.9997% ( 3) 00:08:57.065 13166.778 - 13226.356: 98.0148% ( 2) 00:08:57.065 13226.356 - 13285.935: 98.0299% ( 2) 00:08:57.065 13285.935 - 13345.513: 98.0525% ( 3) 00:08:57.065 13345.513 - 13405.091: 98.0676% ( 2) 00:08:57.065 13822.138 - 13881.716: 98.0903% ( 3) 00:08:57.065 13881.716 - 13941.295: 98.1356% ( 6) 00:08:57.065 13941.295 - 14000.873: 98.1884% ( 7) 00:08:57.065 14000.873 - 14060.451: 98.2412% ( 7) 00:08:57.065 14060.451 - 14120.029: 98.2790% ( 5) 00:08:57.065 14120.029 - 14179.607: 98.3545% ( 10) 00:08:57.065 14179.607 - 14239.185: 98.4149% ( 8) 00:08:57.065 14239.185 - 14298.764: 98.4375% ( 3) 00:08:57.065 14298.764 - 14358.342: 98.4601% ( 3) 00:08:57.065 14358.342 - 14417.920: 98.4903% ( 4) 00:08:57.065 14417.920 - 14477.498: 98.5054% ( 2) 00:08:57.065 14477.498 - 14537.076: 98.5507% ( 6) 00:08:57.065 14537.076 - 14596.655: 98.6036% ( 7) 00:08:57.065 14596.655 - 14656.233: 98.6413% ( 5) 00:08:57.065 14656.233 - 14715.811: 98.6941% ( 7) 00:08:57.065 14715.811 - 14775.389: 98.7319% ( 5) 00:08:57.065 14775.389 - 14834.967: 98.7772% ( 6) 00:08:57.065 14834.967 - 14894.545: 98.8451% ( 9) 00:08:57.065 14894.545 - 14954.124: 98.8829% ( 5) 00:08:57.065 14954.124 - 15013.702: 98.9055% ( 3) 00:08:57.065 15013.702 - 15073.280: 98.9281% ( 3) 00:08:57.065 15073.280 - 15132.858: 98.9583% ( 4) 00:08:57.065 15132.858 - 15192.436: 98.9810% ( 3) 00:08:57.065 15192.436 - 15252.015: 99.0036% ( 3) 00:08:57.065 15252.015 - 15371.171: 99.0338% ( 4) 00:08:57.065 22639.709 - 22758.865: 99.0565% ( 3) 00:08:57.065 22758.865 - 22878.022: 99.0867% ( 4) 00:08:57.065 22878.022 - 22997.178: 99.1093% ( 3) 
00:08:57.065 22997.178 - 23116.335: 99.1319% ( 3) 00:08:57.065 23116.335 - 23235.491: 99.1621% ( 4) 00:08:57.065 23235.491 - 23354.647: 99.1923% ( 4) 00:08:57.065 23354.647 - 23473.804: 99.2150% ( 3) 00:08:57.065 23473.804 - 23592.960: 99.2452% ( 4) 00:08:57.065 23592.960 - 23712.116: 99.2678% ( 3) 00:08:57.065 23712.116 - 23831.273: 99.2980% ( 4) 00:08:57.065 23831.273 - 23950.429: 99.3282% ( 4) 00:08:57.065 23950.429 - 24069.585: 99.3508% ( 3) 00:08:57.065 24069.585 - 24188.742: 99.3810% ( 4) 00:08:57.065 24188.742 - 24307.898: 99.4037% ( 3) 00:08:57.065 24307.898 - 24427.055: 99.4339% ( 4) 00:08:57.065 24427.055 - 24546.211: 99.4641% ( 4) 00:08:57.065 24546.211 - 24665.367: 99.4943% ( 4) 00:08:57.065 24665.367 - 24784.524: 99.5169% ( 3) 00:08:57.065 30504.029 - 30742.342: 99.5471% ( 4) 00:08:57.065 30742.342 - 30980.655: 99.6075% ( 8) 00:08:57.065 30980.655 - 31218.967: 99.6754% ( 9) 00:08:57.065 31218.967 - 31457.280: 99.7358% ( 8) 00:08:57.065 31457.280 - 31695.593: 99.8113% ( 10) 00:08:57.065 31695.593 - 31933.905: 99.8717% ( 8) 00:08:57.065 31933.905 - 32172.218: 99.9396% ( 9) 00:08:57.065 32172.218 - 32410.531: 99.9925% ( 7) 00:08:57.065 32410.531 - 32648.844: 100.0000% ( 1) 00:08:57.065 00:08:57.065 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:57.065 ============================================================================== 00:08:57.065 Range in us Cumulative IO count 00:08:57.065 7923.898 - 7983.476: 0.0302% ( 4) 00:08:57.065 7983.476 - 8043.055: 0.0906% ( 8) 00:08:57.065 8043.055 - 8102.633: 0.1963% ( 14) 00:08:57.065 8102.633 - 8162.211: 0.3774% ( 24) 00:08:57.065 8162.211 - 8221.789: 0.5963% ( 29) 00:08:57.065 8221.789 - 8281.367: 0.9511% ( 47) 00:08:57.065 8281.367 - 8340.945: 1.4417% ( 65) 00:08:57.065 8340.945 - 8400.524: 2.0380% ( 79) 00:08:57.065 8400.524 - 8460.102: 2.8231% ( 104) 00:08:57.065 8460.102 - 8519.680: 3.8496% ( 136) 00:08:57.065 8519.680 - 8579.258: 5.4952% ( 218) 00:08:57.065 8579.258 - 8638.836: 7.6993% ( 292) 00:08:57.065 8638.836 - 8698.415: 10.3110% ( 346) 00:08:57.065 8698.415 - 8757.993: 13.2246% ( 386) 00:08:57.065 8757.993 - 8817.571: 16.8931% ( 486) 00:08:57.065 8817.571 - 8877.149: 20.7654% ( 513) 00:08:57.065 8877.149 - 8936.727: 25.1812% ( 585) 00:08:57.065 8936.727 - 8996.305: 29.5592% ( 580) 00:08:57.065 8996.305 - 9055.884: 34.0655% ( 597) 00:08:57.065 9055.884 - 9115.462: 38.4435% ( 580) 00:08:57.065 9115.462 - 9175.040: 42.6706% ( 560) 00:08:57.065 9175.040 - 9234.618: 46.8750% ( 557) 00:08:57.065 9234.618 - 9294.196: 51.2606% ( 581) 00:08:57.065 9294.196 - 9353.775: 55.4725% ( 558) 00:08:57.065 9353.775 - 9413.353: 59.1033% ( 481) 00:08:57.065 9413.353 - 9472.931: 62.5302% ( 454) 00:08:57.065 9472.931 - 9532.509: 65.6703% ( 416) 00:08:57.065 9532.509 - 9592.087: 68.6368% ( 393) 00:08:57.065 9592.087 - 9651.665: 71.3466% ( 359) 00:08:57.065 9651.665 - 9711.244: 73.6338% ( 303) 00:08:57.065 9711.244 - 9770.822: 75.8001% ( 287) 00:08:57.065 9770.822 - 9830.400: 77.6796% ( 249) 00:08:57.065 9830.400 - 9889.978: 79.2497% ( 208) 00:08:57.065 9889.978 - 9949.556: 80.9179% ( 221) 00:08:57.065 9949.556 - 10009.135: 82.2841% ( 181) 00:08:57.065 10009.135 - 10068.713: 83.6730% ( 184) 00:08:57.065 10068.713 - 10128.291: 84.8204% ( 152) 00:08:57.065 10128.291 - 10187.869: 85.8092% ( 131) 00:08:57.065 10187.869 - 10247.447: 87.0320% ( 162) 00:08:57.065 10247.447 - 10307.025: 87.9680% ( 124) 00:08:57.065 10307.025 - 10366.604: 88.8587% ( 118) 00:08:57.065 10366.604 - 10426.182: 89.5607% ( 93) 00:08:57.065 10426.182 - 10485.760: 
90.4438% ( 117) 00:08:57.065 10485.760 - 10545.338: 91.1232% ( 90) 00:08:57.065 10545.338 - 10604.916: 91.6516% ( 70) 00:08:57.065 10604.916 - 10664.495: 92.1573% ( 67) 00:08:57.065 10664.495 - 10724.073: 92.6857% ( 70) 00:08:57.065 10724.073 - 10783.651: 93.1537% ( 62) 00:08:57.065 10783.651 - 10843.229: 93.5537% ( 53) 00:08:57.065 10843.229 - 10902.807: 94.0142% ( 61) 00:08:57.065 10902.807 - 10962.385: 94.3161% ( 40) 00:08:57.065 10962.385 - 11021.964: 94.6558% ( 45) 00:08:57.065 11021.964 - 11081.542: 94.9124% ( 34) 00:08:57.065 11081.542 - 11141.120: 95.1464% ( 31) 00:08:57.065 11141.120 - 11200.698: 95.3276% ( 24) 00:08:57.065 11200.698 - 11260.276: 95.4937% ( 22) 00:08:57.065 11260.276 - 11319.855: 95.6220% ( 17) 00:08:57.065 11319.855 - 11379.433: 95.7277% ( 14) 00:08:57.065 11379.433 - 11439.011: 95.8107% ( 11) 00:08:57.065 11439.011 - 11498.589: 95.8711% ( 8) 00:08:57.065 11498.589 - 11558.167: 95.9315% ( 8) 00:08:57.065 11558.167 - 11617.745: 96.0069% ( 10) 00:08:57.065 11617.745 - 11677.324: 96.0824% ( 10) 00:08:57.065 11677.324 - 11736.902: 96.1957% ( 15) 00:08:57.065 11736.902 - 11796.480: 96.3013% ( 14) 00:08:57.065 11796.480 - 11856.058: 96.4523% ( 20) 00:08:57.065 11856.058 - 11915.636: 96.5580% ( 14) 00:08:57.065 11915.636 - 11975.215: 96.7240% ( 22) 00:08:57.065 11975.215 - 12034.793: 96.8373% ( 15) 00:08:57.065 12034.793 - 12094.371: 96.9127% ( 10) 00:08:57.065 12094.371 - 12153.949: 96.9958% ( 11) 00:08:57.065 12153.949 - 12213.527: 97.0486% ( 7) 00:08:57.065 12213.527 - 12273.105: 97.1090% ( 8) 00:08:57.065 12273.105 - 12332.684: 97.1694% ( 8) 00:08:57.065 12332.684 - 12392.262: 97.2147% ( 6) 00:08:57.065 12392.262 - 12451.840: 97.2751% ( 8) 00:08:57.065 12451.840 - 12511.418: 97.3430% ( 9) 00:08:57.065 12511.418 - 12570.996: 97.4487% ( 14) 00:08:57.065 12570.996 - 12630.575: 97.5694% ( 16) 00:08:57.065 12630.575 - 12690.153: 97.6676% ( 13) 00:08:57.065 12690.153 - 12749.731: 97.7204% ( 7) 00:08:57.065 12749.731 - 12809.309: 97.7808% ( 8) 00:08:57.065 12809.309 - 12868.887: 97.8412% ( 8) 00:08:57.065 12868.887 - 12928.465: 97.8789% ( 5) 00:08:57.065 12928.465 - 12988.044: 97.8940% ( 2) 00:08:57.065 12988.044 - 13047.622: 97.9167% ( 3) 00:08:57.065 13047.622 - 13107.200: 97.9318% ( 2) 00:08:57.065 13107.200 - 13166.778: 97.9469% ( 2) 00:08:57.065 13166.778 - 13226.356: 97.9695% ( 3) 00:08:57.065 13226.356 - 13285.935: 97.9846% ( 2) 00:08:57.065 13285.935 - 13345.513: 98.0072% ( 3) 00:08:57.065 13345.513 - 13405.091: 98.0223% ( 2) 00:08:57.065 13405.091 - 13464.669: 98.0374% ( 2) 00:08:57.065 13464.669 - 13524.247: 98.0601% ( 3) 00:08:57.065 13524.247 - 13583.825: 98.0676% ( 1) 00:08:57.065 13941.295 - 14000.873: 98.1129% ( 6) 00:08:57.065 14000.873 - 14060.451: 98.1733% ( 8) 00:08:57.065 14060.451 - 14120.029: 98.2261% ( 7) 00:08:57.065 14120.029 - 14179.607: 98.2790% ( 7) 00:08:57.065 14179.607 - 14239.185: 98.3771% ( 13) 00:08:57.065 14239.185 - 14298.764: 98.4149% ( 5) 00:08:57.065 14298.764 - 14358.342: 98.4375% ( 3) 00:08:57.065 14358.342 - 14417.920: 98.4752% ( 5) 00:08:57.065 14417.920 - 14477.498: 98.4979% ( 3) 00:08:57.065 14477.498 - 14537.076: 98.5205% ( 3) 00:08:57.065 14537.076 - 14596.655: 98.6036% ( 11) 00:08:57.065 14596.655 - 14656.233: 98.6564% ( 7) 00:08:57.065 14656.233 - 14715.811: 98.6941% ( 5) 00:08:57.065 14715.811 - 14775.389: 98.7545% ( 8) 00:08:57.065 14775.389 - 14834.967: 98.7998% ( 6) 00:08:57.065 14834.967 - 14894.545: 98.8451% ( 6) 00:08:57.065 14894.545 - 14954.124: 98.8829% ( 5) 00:08:57.065 14954.124 - 15013.702: 98.9055% ( 3) 
00:08:57.065 15013.702 - 15073.280: 98.9206% ( 2) 00:08:57.065 15073.280 - 15132.858: 98.9508% ( 4) 00:08:57.065 15132.858 - 15192.436: 98.9659% ( 2) 00:08:57.065 15192.436 - 15252.015: 98.9810% ( 2) 00:08:57.065 15252.015 - 15371.171: 99.0263% ( 6) 00:08:57.065 15371.171 - 15490.327: 99.0338% ( 1) 00:08:57.066 21209.833 - 21328.989: 99.0489% ( 2) 00:08:57.066 21328.989 - 21448.145: 99.0716% ( 3) 00:08:57.066 21448.145 - 21567.302: 99.0942% ( 3) 00:08:57.066 21567.302 - 21686.458: 99.1244% ( 4) 00:08:57.066 21686.458 - 21805.615: 99.1470% ( 3) 00:08:57.066 21805.615 - 21924.771: 99.1772% ( 4) 00:08:57.066 21924.771 - 22043.927: 99.1999% ( 3) 00:08:57.066 22043.927 - 22163.084: 99.2301% ( 4) 00:08:57.066 22163.084 - 22282.240: 99.2527% ( 3) 00:08:57.066 22282.240 - 22401.396: 99.2754% ( 3) 00:08:57.066 22401.396 - 22520.553: 99.3056% ( 4) 00:08:57.066 22520.553 - 22639.709: 99.3282% ( 3) 00:08:57.066 22639.709 - 22758.865: 99.3508% ( 3) 00:08:57.066 22758.865 - 22878.022: 99.3810% ( 4) 00:08:57.066 22878.022 - 22997.178: 99.4037% ( 3) 00:08:57.066 22997.178 - 23116.335: 99.4339% ( 4) 00:08:57.066 23116.335 - 23235.491: 99.4641% ( 4) 00:08:57.066 23235.491 - 23354.647: 99.4943% ( 4) 00:08:57.066 23354.647 - 23473.804: 99.5169% ( 3) 00:08:57.066 29074.153 - 29193.309: 99.5546% ( 5) 00:08:57.066 29193.309 - 29312.465: 99.5848% ( 4) 00:08:57.066 29312.465 - 29431.622: 99.6150% ( 4) 00:08:57.066 29431.622 - 29550.778: 99.6377% ( 3) 00:08:57.066 29550.778 - 29669.935: 99.6679% ( 4) 00:08:57.066 29669.935 - 29789.091: 99.6981% ( 4) 00:08:57.066 29789.091 - 29908.247: 99.7283% ( 4) 00:08:57.066 29908.247 - 30027.404: 99.7585% ( 4) 00:08:57.066 30027.404 - 30146.560: 99.7962% ( 5) 00:08:57.066 30146.560 - 30265.716: 99.8264% ( 4) 00:08:57.066 30265.716 - 30384.873: 99.8641% ( 5) 00:08:57.066 30384.873 - 30504.029: 99.8943% ( 4) 00:08:57.066 30504.029 - 30742.342: 99.9623% ( 9) 00:08:57.066 30742.342 - 30980.655: 100.0000% ( 5) 00:08:57.066 00:08:57.066 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:57.066 ============================================================================== 00:08:57.066 Range in us Cumulative IO count 00:08:57.066 7864.320 - 7923.898: 0.0075% ( 1) 00:08:57.066 7983.476 - 8043.055: 0.0679% ( 8) 00:08:57.066 8043.055 - 8102.633: 0.1661% ( 13) 00:08:57.066 8102.633 - 8162.211: 0.3623% ( 26) 00:08:57.066 8162.211 - 8221.789: 0.7095% ( 46) 00:08:57.066 8221.789 - 8281.367: 1.1322% ( 56) 00:08:57.066 8281.367 - 8340.945: 1.6531% ( 69) 00:08:57.066 8340.945 - 8400.524: 2.2796% ( 83) 00:08:57.066 8400.524 - 8460.102: 3.1401% ( 114) 00:08:57.066 8460.102 - 8519.680: 4.2421% ( 146) 00:08:57.066 8519.680 - 8579.258: 5.7745% ( 203) 00:08:57.066 8579.258 - 8638.836: 7.9408% ( 287) 00:08:57.066 8638.836 - 8698.415: 10.3110% ( 314) 00:08:57.066 8698.415 - 8757.993: 13.2624% ( 391) 00:08:57.066 8757.993 - 8817.571: 16.9988% ( 495) 00:08:57.066 8817.571 - 8877.149: 21.3768% ( 580) 00:08:57.066 8877.149 - 8936.727: 25.6190% ( 562) 00:08:57.066 8936.727 - 8996.305: 29.9970% ( 580) 00:08:57.066 8996.305 - 9055.884: 34.3448% ( 576) 00:08:57.066 9055.884 - 9115.462: 38.4511% ( 544) 00:08:57.066 9115.462 - 9175.040: 42.4064% ( 524) 00:08:57.066 9175.040 - 9234.618: 46.3315% ( 520) 00:08:57.066 9234.618 - 9294.196: 50.3397% ( 531) 00:08:57.066 9294.196 - 9353.775: 54.1289% ( 502) 00:08:57.066 9353.775 - 9413.353: 57.7068% ( 474) 00:08:57.066 9413.353 - 9472.931: 61.1941% ( 462) 00:08:57.066 9472.931 - 9532.509: 64.6135% ( 453) 00:08:57.066 9532.509 - 9592.087: 67.8140% ( 424) 
00:08:57.066 9592.087 - 9651.665: 70.8786% ( 406) 00:08:57.066 9651.665 - 9711.244: 73.3771% ( 331) 00:08:57.066 9711.244 - 9770.822: 75.4529% ( 275) 00:08:57.066 9770.822 - 9830.400: 77.4306% ( 262) 00:08:57.066 9830.400 - 9889.978: 79.1516% ( 228) 00:08:57.066 9889.978 - 9949.556: 80.9254% ( 235) 00:08:57.066 9949.556 - 10009.135: 82.3219% ( 185) 00:08:57.066 10009.135 - 10068.713: 83.4164% ( 145) 00:08:57.066 10068.713 - 10128.291: 84.4882% ( 142) 00:08:57.066 10128.291 - 10187.869: 85.4695% ( 130) 00:08:57.066 10187.869 - 10247.447: 86.6093% ( 151) 00:08:57.066 10247.447 - 10307.025: 87.6963% ( 144) 00:08:57.066 10307.025 - 10366.604: 88.6171% ( 122) 00:08:57.066 10366.604 - 10426.182: 89.4550% ( 111) 00:08:57.066 10426.182 - 10485.760: 90.3457% ( 118) 00:08:57.066 10485.760 - 10545.338: 91.1458% ( 106) 00:08:57.066 10545.338 - 10604.916: 91.9384% ( 105) 00:08:57.066 10604.916 - 10664.495: 92.5347% ( 79) 00:08:57.066 10664.495 - 10724.073: 92.9952% ( 61) 00:08:57.066 10724.073 - 10783.651: 93.4405% ( 59) 00:08:57.066 10783.651 - 10843.229: 93.7953% ( 47) 00:08:57.066 10843.229 - 10902.807: 94.1274% ( 44) 00:08:57.066 10902.807 - 10962.385: 94.4143% ( 38) 00:08:57.066 10962.385 - 11021.964: 94.7011% ( 38) 00:08:57.066 11021.964 - 11081.542: 94.9728% ( 36) 00:08:57.066 11081.542 - 11141.120: 95.1917% ( 29) 00:08:57.066 11141.120 - 11200.698: 95.3729% ( 24) 00:08:57.066 11200.698 - 11260.276: 95.5465% ( 23) 00:08:57.066 11260.276 - 11319.855: 95.6522% ( 14) 00:08:57.066 11319.855 - 11379.433: 95.7050% ( 7) 00:08:57.066 11379.433 - 11439.011: 95.7654% ( 8) 00:08:57.066 11439.011 - 11498.589: 95.8107% ( 6) 00:08:57.066 11498.589 - 11558.167: 95.8635% ( 7) 00:08:57.066 11558.167 - 11617.745: 95.9088% ( 6) 00:08:57.066 11617.745 - 11677.324: 95.9768% ( 9) 00:08:57.066 11677.324 - 11736.902: 96.0522% ( 10) 00:08:57.066 11736.902 - 11796.480: 96.1277% ( 10) 00:08:57.066 11796.480 - 11856.058: 96.2258% ( 13) 00:08:57.066 11856.058 - 11915.636: 96.3315% ( 14) 00:08:57.066 11915.636 - 11975.215: 96.4598% ( 17) 00:08:57.066 11975.215 - 12034.793: 96.5504% ( 12) 00:08:57.066 12034.793 - 12094.371: 96.6636% ( 15) 00:08:57.066 12094.371 - 12153.949: 96.7920% ( 17) 00:08:57.066 12153.949 - 12213.527: 96.9354% ( 19) 00:08:57.066 12213.527 - 12273.105: 97.0184% ( 11) 00:08:57.066 12273.105 - 12332.684: 97.0864% ( 9) 00:08:57.066 12332.684 - 12392.262: 97.1845% ( 13) 00:08:57.066 12392.262 - 12451.840: 97.2524% ( 9) 00:08:57.066 12451.840 - 12511.418: 97.3053% ( 7) 00:08:57.066 12511.418 - 12570.996: 97.3505% ( 6) 00:08:57.066 12570.996 - 12630.575: 97.4034% ( 7) 00:08:57.066 12630.575 - 12690.153: 97.5015% ( 13) 00:08:57.066 12690.153 - 12749.731: 97.5996% ( 13) 00:08:57.066 12749.731 - 12809.309: 97.6676% ( 9) 00:08:57.066 12809.309 - 12868.887: 97.7280% ( 8) 00:08:57.066 12868.887 - 12928.465: 97.7883% ( 8) 00:08:57.066 12928.465 - 12988.044: 97.8412% ( 7) 00:08:57.066 12988.044 - 13047.622: 97.8714% ( 4) 00:08:57.066 13047.622 - 13107.200: 97.9016% ( 4) 00:08:57.066 13107.200 - 13166.778: 97.9242% ( 3) 00:08:57.066 13166.778 - 13226.356: 97.9469% ( 3) 00:08:57.066 13226.356 - 13285.935: 97.9695% ( 3) 00:08:57.066 13285.935 - 13345.513: 97.9846% ( 2) 00:08:57.066 13345.513 - 13405.091: 98.0601% ( 10) 00:08:57.066 13405.091 - 13464.669: 98.1356% ( 10) 00:08:57.066 13464.669 - 13524.247: 98.2186% ( 11) 00:08:57.066 13524.247 - 13583.825: 98.2714% ( 7) 00:08:57.066 13583.825 - 13643.404: 98.3092% ( 5) 00:08:57.066 13643.404 - 13702.982: 98.3318% ( 3) 00:08:57.066 13702.982 - 13762.560: 98.3545% ( 3) 
00:08:57.066 13762.560 - 13822.138: 98.3696% ( 2) 00:08:57.066 13822.138 - 13881.716: 98.3998% ( 4) 00:08:57.066 13881.716 - 13941.295: 98.4300% ( 4) 00:08:57.066 13941.295 - 14000.873: 98.4601% ( 4) 00:08:57.066 14000.873 - 14060.451: 98.5205% ( 8) 00:08:57.066 14060.451 - 14120.029: 98.6111% ( 12) 00:08:57.066 14120.029 - 14179.607: 98.7017% ( 12) 00:08:57.066 14179.607 - 14239.185: 98.7696% ( 9) 00:08:57.066 14239.185 - 14298.764: 98.7998% ( 4) 00:08:57.066 14298.764 - 14358.342: 98.8225% ( 3) 00:08:57.066 14358.342 - 14417.920: 98.8451% ( 3) 00:08:57.066 14417.920 - 14477.498: 98.8678% ( 3) 00:08:57.066 14477.498 - 14537.076: 98.8829% ( 2) 00:08:57.066 14537.076 - 14596.655: 98.9055% ( 3) 00:08:57.067 14596.655 - 14656.233: 98.9281% ( 3) 00:08:57.067 14656.233 - 14715.811: 98.9583% ( 4) 00:08:57.067 14715.811 - 14775.389: 98.9961% ( 5) 00:08:57.067 14775.389 - 14834.967: 99.0263% ( 4) 00:08:57.067 14834.967 - 14894.545: 99.0338% ( 1) 00:08:57.067 20137.425 - 20256.582: 99.0640% ( 4) 00:08:57.067 20256.582 - 20375.738: 99.1018% ( 5) 00:08:57.067 20375.738 - 20494.895: 99.1395% ( 5) 00:08:57.067 20494.895 - 20614.051: 99.1697% ( 4) 00:08:57.067 20614.051 - 20733.207: 99.2150% ( 6) 00:08:57.067 20733.207 - 20852.364: 99.2603% ( 6) 00:08:57.067 20852.364 - 20971.520: 99.2829% ( 3) 00:08:57.067 20971.520 - 21090.676: 99.3056% ( 3) 00:08:57.067 21090.676 - 21209.833: 99.3282% ( 3) 00:08:57.067 21209.833 - 21328.989: 99.3433% ( 2) 00:08:57.067 21328.989 - 21448.145: 99.3659% ( 3) 00:08:57.067 21448.145 - 21567.302: 99.3886% ( 3) 00:08:57.067 21567.302 - 21686.458: 99.4112% ( 3) 00:08:57.067 21686.458 - 21805.615: 99.4414% ( 4) 00:08:57.067 21805.615 - 21924.771: 99.4641% ( 3) 00:08:57.067 21924.771 - 22043.927: 99.4867% ( 3) 00:08:57.067 22043.927 - 22163.084: 99.5169% ( 4) 00:08:57.067 27286.807 - 27405.964: 99.5396% ( 3) 00:08:57.067 27405.964 - 27525.120: 99.5697% ( 4) 00:08:57.067 27525.120 - 27644.276: 99.5999% ( 4) 00:08:57.067 27644.276 - 27763.433: 99.6301% ( 4) 00:08:57.067 27763.433 - 27882.589: 99.6679% ( 5) 00:08:57.067 27882.589 - 28001.745: 99.6981% ( 4) 00:08:57.067 28001.745 - 28120.902: 99.7283% ( 4) 00:08:57.067 28120.902 - 28240.058: 99.7585% ( 4) 00:08:57.067 28240.058 - 28359.215: 99.7962% ( 5) 00:08:57.067 28359.215 - 28478.371: 99.8264% ( 4) 00:08:57.067 28478.371 - 28597.527: 99.8566% ( 4) 00:08:57.067 28597.527 - 28716.684: 99.8868% ( 4) 00:08:57.067 28716.684 - 28835.840: 99.9170% ( 4) 00:08:57.067 28835.840 - 28954.996: 99.9547% ( 5) 00:08:57.067 28954.996 - 29074.153: 99.9849% ( 4) 00:08:57.067 29074.153 - 29193.309: 100.0000% ( 2) 00:08:57.067 00:08:57.067 14:14:16 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:57.067 00:08:57.067 real 0m2.680s 00:08:57.067 user 0m2.281s 00:08:57.067 sys 0m0.287s 00:08:57.067 ************************************ 00:08:57.067 END TEST nvme_perf 00:08:57.067 ************************************ 00:08:57.067 14:14:16 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.067 14:14:16 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:57.067 14:14:16 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:57.067 14:14:16 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:57.067 14:14:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.067 14:14:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.067 ************************************ 00:08:57.067 START TEST 
nvme_hello_world 00:08:57.067 ************************************ 00:08:57.067 14:14:16 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:57.326 Initializing NVMe Controllers 00:08:57.326 Attached to 0000:00:10.0 00:08:57.326 Namespace ID: 1 size: 6GB 00:08:57.326 Attached to 0000:00:11.0 00:08:57.326 Namespace ID: 1 size: 5GB 00:08:57.326 Attached to 0000:00:13.0 00:08:57.326 Namespace ID: 1 size: 1GB 00:08:57.326 Attached to 0000:00:12.0 00:08:57.326 Namespace ID: 1 size: 4GB 00:08:57.326 Namespace ID: 2 size: 4GB 00:08:57.326 Namespace ID: 3 size: 4GB 00:08:57.326 Initialization complete. 00:08:57.326 INFO: using host memory buffer for IO 00:08:57.326 Hello world! 00:08:57.326 INFO: using host memory buffer for IO 00:08:57.326 Hello world! 00:08:57.326 INFO: using host memory buffer for IO 00:08:57.326 Hello world! 00:08:57.326 INFO: using host memory buffer for IO 00:08:57.326 Hello world! 00:08:57.326 INFO: using host memory buffer for IO 00:08:57.326 Hello world! 00:08:57.326 INFO: using host memory buffer for IO 00:08:57.326 Hello world! 00:08:57.326 ************************************ 00:08:57.326 END TEST nvme_hello_world 00:08:57.326 ************************************ 00:08:57.326 00:08:57.326 real 0m0.331s 00:08:57.326 user 0m0.140s 00:08:57.326 sys 0m0.144s 00:08:57.326 14:14:16 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.326 14:14:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:57.326 14:14:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:57.326 14:14:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.326 14:14:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.326 14:14:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.326 ************************************ 00:08:57.326 START TEST nvme_sgl 00:08:57.326 ************************************ 00:08:57.326 14:14:16 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:57.585 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:57.585 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:57.585 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:57.585 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:57.585 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:57.585 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:57.585 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:57.585 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:57.585 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:57.585 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:57.585 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:57.585 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_6 Invalid IO 
length parameter 00:08:57.585 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:57.585 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:57.585 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:57.843 NVMe Readv/Writev Request test 00:08:57.843 Attached to 0000:00:10.0 00:08:57.843 Attached to 0000:00:11.0 00:08:57.843 Attached to 0000:00:13.0 00:08:57.843 Attached to 0000:00:12.0 00:08:57.843 0000:00:10.0: build_io_request_2 test passed 00:08:57.843 0000:00:10.0: build_io_request_4 test passed 00:08:57.843 0000:00:10.0: build_io_request_5 test passed 00:08:57.843 0000:00:10.0: build_io_request_6 test passed 00:08:57.843 0000:00:10.0: build_io_request_7 test passed 00:08:57.843 0000:00:10.0: build_io_request_10 test passed 00:08:57.843 0000:00:11.0: build_io_request_2 test passed 00:08:57.843 0000:00:11.0: build_io_request_4 test passed 00:08:57.843 0000:00:11.0: build_io_request_5 test passed 00:08:57.843 0000:00:11.0: build_io_request_6 test passed 00:08:57.843 0000:00:11.0: build_io_request_7 test passed 00:08:57.843 0000:00:11.0: build_io_request_10 test passed 00:08:57.843 Cleaning up... 00:08:57.843 00:08:57.843 real 0m0.403s 00:08:57.843 user 0m0.221s 00:08:57.843 sys 0m0.131s 00:08:57.843 ************************************ 00:08:57.843 END TEST nvme_sgl 00:08:57.843 ************************************ 00:08:57.843 14:14:17 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.843 14:14:17 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:57.843 14:14:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:57.843 14:14:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.843 14:14:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.843 14:14:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.843 ************************************ 00:08:57.843 START TEST nvme_e2edp 00:08:57.843 ************************************ 00:08:57.843 14:14:17 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:58.102 NVMe Write/Read with End-to-End data protection test 00:08:58.102 Attached to 0000:00:10.0 00:08:58.102 Attached to 0000:00:11.0 00:08:58.102 Attached to 0000:00:13.0 00:08:58.102 Attached to 0000:00:12.0 00:08:58.102 Cleaning up... 
00:08:58.102 00:08:58.102 real 0m0.263s 00:08:58.102 user 0m0.096s 00:08:58.102 sys 0m0.121s 00:08:58.102 ************************************ 00:08:58.102 END TEST nvme_e2edp 00:08:58.102 ************************************ 00:08:58.102 14:14:17 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.102 14:14:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:58.102 14:14:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:58.102 14:14:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.102 14:14:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.102 14:14:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.102 ************************************ 00:08:58.102 START TEST nvme_reserve 00:08:58.102 ************************************ 00:08:58.102 14:14:17 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:58.361 ===================================================== 00:08:58.361 NVMe Controller at PCI bus 0, device 16, function 0 00:08:58.361 ===================================================== 00:08:58.361 Reservations: Not Supported 00:08:58.361 ===================================================== 00:08:58.361 NVMe Controller at PCI bus 0, device 17, function 0 00:08:58.361 ===================================================== 00:08:58.361 Reservations: Not Supported 00:08:58.361 ===================================================== 00:08:58.361 NVMe Controller at PCI bus 0, device 19, function 0 00:08:58.361 ===================================================== 00:08:58.361 Reservations: Not Supported 00:08:58.361 ===================================================== 00:08:58.361 NVMe Controller at PCI bus 0, device 18, function 0 00:08:58.361 ===================================================== 00:08:58.361 Reservations: Not Supported 00:08:58.361 Reservation test passed 00:08:58.361 00:08:58.361 real 0m0.286s 00:08:58.361 user 0m0.104s 00:08:58.361 sys 0m0.138s 00:08:58.361 ************************************ 00:08:58.361 END TEST nvme_reserve 00:08:58.361 ************************************ 00:08:58.361 14:14:18 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.361 14:14:18 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:58.361 14:14:18 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:58.361 14:14:18 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.361 14:14:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.361 14:14:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.361 ************************************ 00:08:58.361 START TEST nvme_err_injection 00:08:58.361 ************************************ 00:08:58.361 14:14:18 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:58.619 NVMe Error Injection test 00:08:58.619 Attached to 0000:00:10.0 00:08:58.619 Attached to 0000:00:11.0 00:08:58.619 Attached to 0000:00:13.0 00:08:58.619 Attached to 0000:00:12.0 00:08:58.619 0000:00:10.0: get features failed as expected 00:08:58.619 0000:00:11.0: get features failed as expected 00:08:58.619 0000:00:13.0: get features failed as expected 00:08:58.619 0000:00:12.0: get features failed as expected 00:08:58.619 
0000:00:12.0: get features successfully as expected 00:08:58.619 0000:00:10.0: get features successfully as expected 00:08:58.619 0000:00:11.0: get features successfully as expected 00:08:58.619 0000:00:13.0: get features successfully as expected 00:08:58.619 0000:00:10.0: read failed as expected 00:08:58.619 0000:00:11.0: read failed as expected 00:08:58.619 0000:00:13.0: read failed as expected 00:08:58.619 0000:00:12.0: read failed as expected 00:08:58.619 0000:00:10.0: read successfully as expected 00:08:58.619 0000:00:11.0: read successfully as expected 00:08:58.620 0000:00:13.0: read successfully as expected 00:08:58.620 0000:00:12.0: read successfully as expected 00:08:58.620 Cleaning up... 00:08:58.620 00:08:58.620 real 0m0.296s 00:08:58.620 user 0m0.120s 00:08:58.620 sys 0m0.133s 00:08:58.620 ************************************ 00:08:58.620 END TEST nvme_err_injection 00:08:58.620 ************************************ 00:08:58.620 14:14:18 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:58.620 14:14:18 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:58.879 14:14:18 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:58.879 14:14:18 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:58.879 14:14:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.879 14:14:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.879 ************************************ 00:08:58.879 START TEST nvme_overhead 00:08:58.879 ************************************ 00:08:58.879 14:14:18 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:00.259 Initializing NVMe Controllers 00:09:00.259 Attached to 0000:00:10.0 00:09:00.259 Attached to 0000:00:11.0 00:09:00.259 Attached to 0000:00:13.0 00:09:00.259 Attached to 0000:00:12.0 00:09:00.259 Initialization complete. Launching workers. 
00:09:00.259 submit (in ns) avg, min, max = 16453.2, 12906.4, 90409.5 00:09:00.259 complete (in ns) avg, min, max = 11913.1, 8658.2, 101827.7 00:09:00.259 00:09:00.259 Submit histogram 00:09:00.259 ================ 00:09:00.259 Range in us Cumulative Count 00:09:00.259 12.858 - 12.916: 0.0118% ( 1) 00:09:00.259 12.916 - 12.975: 0.0236% ( 1) 00:09:00.259 12.975 - 13.033: 0.0354% ( 1) 00:09:00.259 13.091 - 13.149: 0.0472% ( 1) 00:09:00.259 13.149 - 13.207: 0.2007% ( 13) 00:09:00.259 13.207 - 13.265: 0.9797% ( 66) 00:09:00.259 13.265 - 13.324: 2.7384% ( 149) 00:09:00.259 13.324 - 13.382: 5.0401% ( 195) 00:09:00.259 13.382 - 13.440: 7.6369% ( 220) 00:09:00.259 13.440 - 13.498: 9.7144% ( 176) 00:09:00.259 13.498 - 13.556: 11.6383% ( 163) 00:09:00.259 13.556 - 13.615: 14.3178% ( 227) 00:09:00.259 13.615 - 13.673: 18.8621% ( 385) 00:09:00.259 13.673 - 13.731: 25.6964% ( 579) 00:09:00.259 13.731 - 13.789: 31.9523% ( 530) 00:09:00.259 13.789 - 13.847: 36.5203% ( 387) 00:09:00.259 13.847 - 13.905: 39.0227% ( 212) 00:09:00.259 13.905 - 13.964: 40.3447% ( 112) 00:09:00.259 13.964 - 14.022: 41.6076% ( 107) 00:09:00.259 14.022 - 14.080: 43.1657% ( 132) 00:09:00.259 14.080 - 14.138: 45.0189% ( 157) 00:09:00.259 14.138 - 14.196: 46.8248% ( 153) 00:09:00.259 14.196 - 14.255: 48.3357% ( 128) 00:09:00.259 14.255 - 14.313: 49.3272% ( 84) 00:09:00.259 14.313 - 14.371: 50.1534% ( 70) 00:09:00.259 14.371 - 14.429: 51.5227% ( 116) 00:09:00.259 14.429 - 14.487: 53.4230% ( 161) 00:09:00.259 14.487 - 14.545: 55.3116% ( 160) 00:09:00.259 14.545 - 14.604: 56.7635% ( 123) 00:09:00.259 14.604 - 14.662: 58.1681% ( 119) 00:09:00.259 14.662 - 14.720: 58.9825% ( 69) 00:09:00.259 14.720 - 14.778: 59.5255% ( 46) 00:09:00.259 14.778 - 14.836: 60.0803% ( 47) 00:09:00.259 14.836 - 14.895: 60.6350% ( 47) 00:09:00.259 14.895 - 15.011: 61.6501% ( 86) 00:09:00.259 15.011 - 15.127: 62.2049% ( 47) 00:09:00.259 15.127 - 15.244: 62.6771% ( 40) 00:09:00.259 15.244 - 15.360: 63.0312% ( 30) 00:09:00.259 15.360 - 15.476: 63.2908% ( 22) 00:09:00.259 15.476 - 15.593: 63.4797% ( 16) 00:09:00.259 15.593 - 15.709: 63.5387% ( 5) 00:09:00.259 15.709 - 15.825: 63.5977% ( 5) 00:09:00.259 15.825 - 15.942: 63.6568% ( 5) 00:09:00.259 15.942 - 16.058: 63.6922% ( 3) 00:09:00.259 16.058 - 16.175: 63.7748% ( 7) 00:09:00.259 16.175 - 16.291: 63.8220% ( 4) 00:09:00.259 16.291 - 16.407: 63.8456% ( 2) 00:09:00.259 16.407 - 16.524: 63.8810% ( 3) 00:09:00.259 16.524 - 16.640: 63.8928% ( 1) 00:09:00.259 16.640 - 16.756: 63.9164% ( 2) 00:09:00.259 16.756 - 16.873: 63.9636% ( 4) 00:09:00.259 16.873 - 16.989: 64.0345% ( 6) 00:09:00.259 16.989 - 17.105: 64.0699% ( 3) 00:09:00.259 17.105 - 17.222: 66.1945% ( 180) 00:09:00.259 17.222 - 17.338: 72.7573% ( 556) 00:09:00.259 17.338 - 17.455: 77.7030% ( 419) 00:09:00.259 17.455 - 17.571: 79.7686% ( 175) 00:09:00.259 17.571 - 17.687: 80.7365% ( 82) 00:09:00.259 17.687 - 17.804: 81.7753% ( 88) 00:09:00.259 17.804 - 17.920: 82.9202% ( 97) 00:09:00.259 17.920 - 18.036: 83.8527% ( 79) 00:09:00.259 18.036 - 18.153: 84.4311% ( 49) 00:09:00.259 18.153 - 18.269: 84.7852% ( 30) 00:09:00.259 18.269 - 18.385: 85.0803% ( 25) 00:09:00.259 18.385 - 18.502: 85.3636% ( 24) 00:09:00.259 18.502 - 18.618: 85.5878% ( 19) 00:09:00.259 18.618 - 18.735: 85.9065% ( 27) 00:09:00.259 18.735 - 18.851: 86.2016% ( 25) 00:09:00.259 18.851 - 18.967: 86.4613% ( 22) 00:09:00.259 18.967 - 19.084: 86.6974% ( 20) 00:09:00.259 19.084 - 19.200: 86.8980% ( 17) 00:09:00.259 19.200 - 19.316: 87.0751% ( 15) 00:09:00.259 19.316 - 19.433: 87.3229% ( 21) 00:09:00.259 
19.433 - 19.549: 87.4882% ( 14) 00:09:00.259 19.549 - 19.665: 87.6534% ( 14) 00:09:00.259 19.665 - 19.782: 87.7833% ( 11) 00:09:00.259 19.782 - 19.898: 87.9367% ( 13) 00:09:00.259 19.898 - 20.015: 88.0784% ( 12) 00:09:00.259 20.015 - 20.131: 88.2554% ( 15) 00:09:00.259 20.131 - 20.247: 88.4089% ( 13) 00:09:00.259 20.247 - 20.364: 88.5269% ( 10) 00:09:00.259 20.364 - 20.480: 88.6568% ( 11) 00:09:00.259 20.480 - 20.596: 88.8456% ( 16) 00:09:00.259 20.596 - 20.713: 88.9873% ( 12) 00:09:00.259 20.713 - 20.829: 89.0699% ( 7) 00:09:00.259 20.829 - 20.945: 89.1407% ( 6) 00:09:00.259 20.945 - 21.062: 89.3178% ( 15) 00:09:00.259 21.062 - 21.178: 89.4712% ( 13) 00:09:00.259 21.178 - 21.295: 89.5538% ( 7) 00:09:00.259 21.295 - 21.411: 89.6483% ( 8) 00:09:00.259 21.411 - 21.527: 89.7309% ( 7) 00:09:00.259 21.527 - 21.644: 89.8017% ( 6) 00:09:00.259 21.644 - 21.760: 89.8607% ( 5) 00:09:00.259 21.760 - 21.876: 89.9551% ( 8) 00:09:00.259 21.876 - 21.993: 90.0260% ( 6) 00:09:00.259 21.993 - 22.109: 90.1558% ( 11) 00:09:00.259 22.109 - 22.225: 90.2266% ( 6) 00:09:00.259 22.225 - 22.342: 90.2738% ( 4) 00:09:00.259 22.342 - 22.458: 90.2975% ( 2) 00:09:00.259 22.458 - 22.575: 90.3683% ( 6) 00:09:00.259 22.575 - 22.691: 90.4273% ( 5) 00:09:00.259 22.691 - 22.807: 90.4863% ( 5) 00:09:00.259 22.807 - 22.924: 90.5689% ( 7) 00:09:00.259 22.924 - 23.040: 90.6043% ( 3) 00:09:00.259 23.040 - 23.156: 90.7460% ( 12) 00:09:00.259 23.156 - 23.273: 90.8640% ( 10) 00:09:00.259 23.273 - 23.389: 90.8994% ( 3) 00:09:00.259 23.389 - 23.505: 90.9939% ( 8) 00:09:00.259 23.505 - 23.622: 91.1001% ( 9) 00:09:00.259 23.622 - 23.738: 91.1473% ( 4) 00:09:00.259 23.738 - 23.855: 91.2299% ( 7) 00:09:00.259 23.855 - 23.971: 91.2535% ( 2) 00:09:00.259 23.971 - 24.087: 91.2890% ( 3) 00:09:00.259 24.087 - 24.204: 91.3126% ( 2) 00:09:00.259 24.204 - 24.320: 91.3716% ( 5) 00:09:00.259 24.320 - 24.436: 91.4306% ( 5) 00:09:00.259 24.436 - 24.553: 91.4660% ( 3) 00:09:00.259 24.553 - 24.669: 91.5250% ( 5) 00:09:00.259 24.669 - 24.785: 91.5840% ( 5) 00:09:00.259 24.785 - 24.902: 91.6195% ( 3) 00:09:00.259 24.902 - 25.018: 91.6313% ( 1) 00:09:00.259 25.018 - 25.135: 91.7139% ( 7) 00:09:00.259 25.135 - 25.251: 91.7257% ( 1) 00:09:00.259 25.251 - 25.367: 91.7493% ( 2) 00:09:00.259 25.367 - 25.484: 91.7611% ( 1) 00:09:00.259 25.484 - 25.600: 91.8083% ( 4) 00:09:00.259 25.600 - 25.716: 91.8319% ( 2) 00:09:00.259 25.716 - 25.833: 91.8791% ( 4) 00:09:00.259 25.833 - 25.949: 91.9145% ( 3) 00:09:00.259 25.949 - 26.065: 91.9263% ( 1) 00:09:00.259 26.065 - 26.182: 91.9618% ( 3) 00:09:00.259 26.182 - 26.298: 91.9972% ( 3) 00:09:00.259 26.298 - 26.415: 92.0326% ( 3) 00:09:00.259 26.415 - 26.531: 92.0444% ( 1) 00:09:00.259 26.531 - 26.647: 92.1152% ( 6) 00:09:00.259 26.647 - 26.764: 92.1506% ( 3) 00:09:00.259 26.764 - 26.880: 92.2214% ( 6) 00:09:00.259 26.880 - 26.996: 92.2332% ( 1) 00:09:00.259 26.996 - 27.113: 92.2568% ( 2) 00:09:00.259 27.113 - 27.229: 92.2805% ( 2) 00:09:00.259 27.229 - 27.345: 92.3041% ( 2) 00:09:00.259 27.345 - 27.462: 92.3159% ( 1) 00:09:00.259 27.462 - 27.578: 92.3513% ( 3) 00:09:00.259 27.578 - 27.695: 92.4221% ( 6) 00:09:00.259 27.695 - 27.811: 92.4575% ( 3) 00:09:00.259 27.811 - 27.927: 92.6228% ( 14) 00:09:00.259 27.927 - 28.044: 92.9415% ( 27) 00:09:00.259 28.044 - 28.160: 93.3310% ( 33) 00:09:00.259 28.160 - 28.276: 94.0746% ( 63) 00:09:00.259 28.276 - 28.393: 94.8654% ( 67) 00:09:00.259 28.393 - 28.509: 95.6799% ( 69) 00:09:00.259 28.509 - 28.625: 96.2701% ( 50) 00:09:00.259 28.625 - 28.742: 96.9783% ( 60) 00:09:00.259 28.742 - 
28.858: 97.3914% ( 35) 00:09:00.259 28.858 - 28.975: 97.6393% ( 21) 00:09:00.259 28.975 - 29.091: 97.8045% ( 14) 00:09:00.259 29.091 - 29.207: 98.0052% ( 17) 00:09:00.259 29.207 - 29.324: 98.1586% ( 13) 00:09:00.259 29.324 - 29.440: 98.3121% ( 13) 00:09:00.259 29.440 - 29.556: 98.3947% ( 7) 00:09:00.260 29.556 - 29.673: 98.4183% ( 2) 00:09:00.260 29.673 - 29.789: 98.4891% ( 6) 00:09:00.260 29.789 - 30.022: 98.5954% ( 9) 00:09:00.260 30.022 - 30.255: 98.6426% ( 4) 00:09:00.260 30.255 - 30.487: 98.6662% ( 2) 00:09:00.260 30.487 - 30.720: 98.7252% ( 5) 00:09:00.260 30.720 - 30.953: 98.7842% ( 5) 00:09:00.260 30.953 - 31.185: 98.8078% ( 2) 00:09:00.260 31.185 - 31.418: 98.8432% ( 3) 00:09:00.260 31.418 - 31.651: 98.9023% ( 5) 00:09:00.260 31.651 - 31.884: 98.9613% ( 5) 00:09:00.260 31.884 - 32.116: 98.9967% ( 3) 00:09:00.260 32.349 - 32.582: 99.0321% ( 3) 00:09:00.260 32.582 - 32.815: 99.0557% ( 2) 00:09:00.260 32.815 - 33.047: 99.0911% ( 3) 00:09:00.260 33.047 - 33.280: 99.1147% ( 2) 00:09:00.260 33.280 - 33.513: 99.1383% ( 2) 00:09:00.260 33.513 - 33.745: 99.2092% ( 6) 00:09:00.260 33.745 - 33.978: 99.2682% ( 5) 00:09:00.260 33.978 - 34.211: 99.2918% ( 2) 00:09:00.260 34.211 - 34.444: 99.3862% ( 8) 00:09:00.260 34.444 - 34.676: 99.4452% ( 5) 00:09:00.260 34.676 - 34.909: 99.4806% ( 3) 00:09:00.260 34.909 - 35.142: 99.5042% ( 2) 00:09:00.260 35.142 - 35.375: 99.5279% ( 2) 00:09:00.260 35.607 - 35.840: 99.5751% ( 4) 00:09:00.260 35.840 - 36.073: 99.5869% ( 1) 00:09:00.260 36.073 - 36.305: 99.5987% ( 1) 00:09:00.260 36.305 - 36.538: 99.6105% ( 1) 00:09:00.260 36.538 - 36.771: 99.6341% ( 2) 00:09:00.260 36.771 - 37.004: 99.6577% ( 2) 00:09:00.260 37.004 - 37.236: 99.6695% ( 1) 00:09:00.260 37.469 - 37.702: 99.6813% ( 1) 00:09:00.260 37.702 - 37.935: 99.7049% ( 2) 00:09:00.260 38.167 - 38.400: 99.7167% ( 1) 00:09:00.260 38.400 - 38.633: 99.7403% ( 2) 00:09:00.260 41.425 - 41.658: 99.7521% ( 1) 00:09:00.260 41.658 - 41.891: 99.7639% ( 1) 00:09:00.260 42.822 - 43.055: 99.7757% ( 1) 00:09:00.260 43.287 - 43.520: 99.8111% ( 3) 00:09:00.260 43.520 - 43.753: 99.8229% ( 1) 00:09:00.260 43.753 - 43.985: 99.8466% ( 2) 00:09:00.260 43.985 - 44.218: 99.8702% ( 2) 00:09:00.260 44.451 - 44.684: 99.8820% ( 1) 00:09:00.260 44.916 - 45.149: 99.8938% ( 1) 00:09:00.260 45.149 - 45.382: 99.9056% ( 1) 00:09:00.260 45.615 - 45.847: 99.9174% ( 1) 00:09:00.260 45.847 - 46.080: 99.9292% ( 1) 00:09:00.260 47.942 - 48.175: 99.9410% ( 1) 00:09:00.260 49.105 - 49.338: 99.9528% ( 1) 00:09:00.260 51.898 - 52.131: 99.9646% ( 1) 00:09:00.260 60.044 - 60.509: 99.9764% ( 1) 00:09:00.260 88.436 - 88.902: 99.9882% ( 1) 00:09:00.260 90.298 - 90.764: 100.0000% ( 1) 00:09:00.260 00:09:00.260 Complete histogram 00:09:00.260 ================== 00:09:00.260 Range in us Cumulative Count 00:09:00.260 8.611 - 8.669: 0.0118% ( 1) 00:09:00.260 8.669 - 8.727: 0.1416% ( 11) 00:09:00.260 8.727 - 8.785: 0.4839% ( 29) 00:09:00.260 8.785 - 8.844: 1.2394% ( 64) 00:09:00.260 8.844 - 8.902: 2.3961% ( 98) 00:09:00.260 8.902 - 8.960: 3.1634% ( 65) 00:09:00.260 8.960 - 9.018: 4.3909% ( 104) 00:09:00.260 9.018 - 9.076: 7.1412% ( 233) 00:09:00.260 9.076 - 9.135: 11.7800% ( 393) 00:09:00.260 9.135 - 9.193: 18.2247% ( 546) 00:09:00.260 9.193 - 9.251: 22.8281% ( 390) 00:09:00.260 9.251 - 9.309: 26.1213% ( 279) 00:09:00.260 9.309 - 9.367: 29.1076% ( 253) 00:09:00.260 9.367 - 9.425: 32.9674% ( 327) 00:09:00.260 9.425 - 9.484: 36.7092% ( 317) 00:09:00.260 9.484 - 9.542: 39.4948% ( 236) 00:09:00.260 9.542 - 9.600: 41.1827% ( 143) 00:09:00.260 9.600 - 9.658: 
42.6582% ( 125) 00:09:00.260 9.658 - 9.716: 45.0189% ( 200) 00:09:00.260 9.716 - 9.775: 47.5212% ( 212) 00:09:00.260 9.775 - 9.833: 49.8111% ( 194) 00:09:00.260 9.833 - 9.891: 51.3810% ( 133) 00:09:00.260 9.891 - 9.949: 52.5260% ( 97) 00:09:00.260 9.949 - 10.007: 53.5057% ( 83) 00:09:00.260 10.007 - 10.065: 54.4854% ( 83) 00:09:00.260 10.065 - 10.124: 55.5477% ( 90) 00:09:00.260 10.124 - 10.182: 56.3857% ( 71) 00:09:00.260 10.182 - 10.240: 57.1294% ( 63) 00:09:00.260 10.240 - 10.298: 57.7668% ( 54) 00:09:00.260 10.298 - 10.356: 58.2035% ( 37) 00:09:00.260 10.356 - 10.415: 58.6284% ( 36) 00:09:00.260 10.415 - 10.473: 58.7701% ( 12) 00:09:00.260 10.473 - 10.531: 59.0179% ( 21) 00:09:00.260 10.531 - 10.589: 59.2894% ( 23) 00:09:00.260 10.589 - 10.647: 59.6081% ( 27) 00:09:00.260 10.647 - 10.705: 59.9504% ( 29) 00:09:00.260 10.705 - 10.764: 60.1275% ( 15) 00:09:00.260 10.764 - 10.822: 60.2927% ( 14) 00:09:00.260 10.822 - 10.880: 60.4344% ( 12) 00:09:00.260 10.880 - 10.938: 60.5760% ( 12) 00:09:00.260 10.938 - 10.996: 60.6586% ( 7) 00:09:00.260 10.996 - 11.055: 60.7767% ( 10) 00:09:00.260 11.055 - 11.113: 60.8711% ( 8) 00:09:00.260 11.113 - 11.171: 60.9773% ( 9) 00:09:00.260 11.171 - 11.229: 61.1308% ( 13) 00:09:00.260 11.229 - 11.287: 61.3314% ( 17) 00:09:00.260 11.287 - 11.345: 61.4141% ( 7) 00:09:00.260 11.345 - 11.404: 61.5793% ( 14) 00:09:00.260 11.404 - 11.462: 61.6974% ( 10) 00:09:00.260 11.462 - 11.520: 61.8036% ( 9) 00:09:00.260 11.520 - 11.578: 62.0633% ( 22) 00:09:00.260 11.578 - 11.636: 63.0194% ( 81) 00:09:00.260 11.636 - 11.695: 65.2738% ( 191) 00:09:00.260 11.695 - 11.753: 69.2635% ( 338) 00:09:00.260 11.753 - 11.811: 73.3829% ( 349) 00:09:00.260 11.811 - 11.869: 76.9240% ( 300) 00:09:00.260 11.869 - 11.927: 79.1903% ( 192) 00:09:00.260 11.927 - 11.985: 80.3824% ( 101) 00:09:00.260 11.985 - 12.044: 80.9726% ( 50) 00:09:00.260 12.044 - 12.102: 81.2677% ( 25) 00:09:00.260 12.102 - 12.160: 81.3975% ( 11) 00:09:00.260 12.160 - 12.218: 81.5392% ( 12) 00:09:00.260 12.218 - 12.276: 81.6690% ( 11) 00:09:00.260 12.276 - 12.335: 81.8579% ( 16) 00:09:00.260 12.335 - 12.393: 82.0231% ( 14) 00:09:00.260 12.393 - 12.451: 82.2828% ( 22) 00:09:00.260 12.451 - 12.509: 82.4481% ( 14) 00:09:00.260 12.509 - 12.567: 82.7195% ( 23) 00:09:00.260 12.567 - 12.625: 82.8376% ( 10) 00:09:00.260 12.625 - 12.684: 82.9792% ( 12) 00:09:00.260 12.684 - 12.742: 83.3215% ( 29) 00:09:00.260 12.742 - 12.800: 83.5812% ( 22) 00:09:00.260 12.800 - 12.858: 84.0179% ( 37) 00:09:00.260 12.858 - 12.916: 84.3012% ( 24) 00:09:00.260 12.916 - 12.975: 84.6553% ( 30) 00:09:00.260 12.975 - 13.033: 84.8678% ( 18) 00:09:00.260 13.033 - 13.091: 85.1393% ( 23) 00:09:00.260 13.091 - 13.149: 85.2927% ( 13) 00:09:00.260 13.149 - 13.207: 85.4108% ( 10) 00:09:00.260 13.207 - 13.265: 85.4580% ( 4) 00:09:00.260 13.265 - 13.324: 85.5288% ( 6) 00:09:00.260 13.324 - 13.382: 85.5878% ( 5) 00:09:00.260 13.382 - 13.440: 85.6114% ( 2) 00:09:00.260 13.440 - 13.498: 85.6822% ( 6) 00:09:00.260 13.498 - 13.556: 85.7531% ( 6) 00:09:00.260 13.556 - 13.615: 85.7885% ( 3) 00:09:00.260 13.615 - 13.673: 85.8121% ( 2) 00:09:00.260 13.673 - 13.731: 85.8475% ( 3) 00:09:00.260 13.731 - 13.789: 85.9065% ( 5) 00:09:00.260 13.789 - 13.847: 85.9419% ( 3) 00:09:00.260 13.847 - 13.905: 85.9773% ( 3) 00:09:00.260 13.905 - 13.964: 86.0364% ( 5) 00:09:00.260 13.964 - 14.022: 86.0718% ( 3) 00:09:00.260 14.022 - 14.080: 86.1072% ( 3) 00:09:00.260 14.080 - 14.138: 86.1544% ( 4) 00:09:00.260 14.138 - 14.196: 86.1662% ( 1) 00:09:00.260 14.196 - 14.255: 86.2252% ( 5) 
00:09:00.260 14.255 - 14.313: 86.3905% ( 14) 00:09:00.260 14.313 - 14.371: 86.4731% ( 7) 00:09:00.260 14.371 - 14.429: 86.5321% ( 5) 00:09:00.260 14.429 - 14.487: 86.6383% ( 9) 00:09:00.260 14.487 - 14.545: 86.7446% ( 9) 00:09:00.260 14.545 - 14.604: 86.8272% ( 7) 00:09:00.260 14.604 - 14.662: 86.9570% ( 11) 00:09:00.260 14.662 - 14.720: 87.0515% ( 8) 00:09:00.260 14.720 - 14.778: 87.1223% ( 6) 00:09:00.260 14.778 - 14.836: 87.1695% ( 4) 00:09:00.260 14.836 - 14.895: 87.2875% ( 10) 00:09:00.260 14.895 - 15.011: 87.4410% ( 13) 00:09:00.260 15.011 - 15.127: 87.5826% ( 12) 00:09:00.260 15.127 - 15.244: 87.6889% ( 9) 00:09:00.260 15.244 - 15.360: 87.8187% ( 11) 00:09:00.260 15.360 - 15.476: 87.8777% ( 5) 00:09:00.260 15.476 - 15.593: 87.9485% ( 6) 00:09:00.260 15.593 - 15.709: 87.9721% ( 2) 00:09:00.260 15.709 - 15.825: 88.0548% ( 7) 00:09:00.260 15.825 - 15.942: 88.2082% ( 13) 00:09:00.260 15.942 - 16.058: 88.2790% ( 6) 00:09:00.260 16.058 - 16.175: 88.4089% ( 11) 00:09:00.260 16.175 - 16.291: 88.5505% ( 12) 00:09:00.260 16.291 - 16.407: 88.6331% ( 7) 00:09:00.260 16.407 - 16.524: 88.6922% ( 5) 00:09:00.260 16.524 - 16.640: 88.7748% ( 7) 00:09:00.260 16.640 - 16.756: 88.8810% ( 9) 00:09:00.260 16.756 - 16.873: 88.9873% ( 9) 00:09:00.260 16.873 - 16.989: 89.0581% ( 6) 00:09:00.261 16.989 - 17.105: 89.1525% ( 8) 00:09:00.261 17.105 - 17.222: 89.1879% ( 3) 00:09:00.261 17.222 - 17.338: 89.2587% ( 6) 00:09:00.261 17.338 - 17.455: 89.2823% ( 2) 00:09:00.261 17.455 - 17.571: 89.2941% ( 1) 00:09:00.261 17.571 - 17.687: 89.3296% ( 3) 00:09:00.261 17.687 - 17.804: 89.3532% ( 2) 00:09:00.261 17.804 - 17.920: 89.4240% ( 6) 00:09:00.261 17.920 - 18.036: 89.4476% ( 2) 00:09:00.261 18.036 - 18.153: 89.4948% ( 4) 00:09:00.261 18.153 - 18.269: 89.5184% ( 2) 00:09:00.261 18.269 - 18.385: 89.5302% ( 1) 00:09:00.261 18.385 - 18.502: 89.5538% ( 2) 00:09:00.261 18.502 - 18.618: 89.6128% ( 5) 00:09:00.261 18.618 - 18.735: 89.6483% ( 3) 00:09:00.261 18.735 - 18.851: 89.6837% ( 3) 00:09:00.261 18.851 - 18.967: 89.7309% ( 4) 00:09:00.261 18.967 - 19.084: 89.7663% ( 3) 00:09:00.261 19.084 - 19.200: 89.7899% ( 2) 00:09:00.261 19.200 - 19.316: 89.8017% ( 1) 00:09:00.261 19.665 - 19.782: 89.8607% ( 5) 00:09:00.261 19.782 - 19.898: 89.8725% ( 1) 00:09:00.261 19.898 - 20.015: 89.8961% ( 2) 00:09:00.261 20.015 - 20.131: 89.9551% ( 5) 00:09:00.261 20.131 - 20.247: 89.9788% ( 2) 00:09:00.261 20.247 - 20.364: 90.0024% ( 2) 00:09:00.261 20.364 - 20.480: 90.0378% ( 3) 00:09:00.261 20.480 - 20.596: 90.1440% ( 9) 00:09:00.261 20.596 - 20.713: 90.2030% ( 5) 00:09:00.261 20.713 - 20.829: 90.2148% ( 1) 00:09:00.261 20.829 - 20.945: 90.2384% ( 2) 00:09:00.261 21.062 - 21.178: 90.2502% ( 1) 00:09:00.261 21.178 - 21.295: 90.2975% ( 4) 00:09:00.261 21.527 - 21.644: 90.3565% ( 5) 00:09:00.261 21.760 - 21.876: 90.3683% ( 1) 00:09:00.261 21.876 - 21.993: 90.3801% ( 1) 00:09:00.261 21.993 - 22.109: 90.4037% ( 2) 00:09:00.261 22.109 - 22.225: 90.4155% ( 1) 00:09:00.261 22.225 - 22.342: 90.4391% ( 2) 00:09:00.261 22.342 - 22.458: 90.4509% ( 1) 00:09:00.261 22.458 - 22.575: 90.4627% ( 1) 00:09:00.261 22.691 - 22.807: 90.4745% ( 1) 00:09:00.261 22.807 - 22.924: 90.4863% ( 1) 00:09:00.261 22.924 - 23.040: 90.4981% ( 1) 00:09:00.261 23.040 - 23.156: 90.5217% ( 2) 00:09:00.261 23.156 - 23.273: 90.6516% ( 11) 00:09:00.261 23.273 - 23.389: 90.8286% ( 15) 00:09:00.261 23.389 - 23.505: 91.3362% ( 43) 00:09:00.261 23.505 - 23.622: 91.9500% ( 52) 00:09:00.261 23.622 - 23.738: 92.8352% ( 75) 00:09:00.261 23.738 - 23.855: 93.8975% ( 90) 00:09:00.261 23.855 
- 23.971: 94.7120% ( 69) 00:09:00.261 23.971 - 24.087: 95.6563% ( 80) 00:09:00.261 24.087 - 24.204: 96.3881% ( 62) 00:09:00.261 24.204 - 24.320: 97.0609% ( 57) 00:09:00.261 24.320 - 24.436: 97.4622% ( 34) 00:09:00.261 24.436 - 24.553: 97.7101% ( 21) 00:09:00.261 24.553 - 24.669: 97.8872% ( 15) 00:09:00.261 24.669 - 24.785: 97.9934% ( 9) 00:09:00.261 24.785 - 24.902: 98.1586% ( 14) 00:09:00.261 24.902 - 25.018: 98.2649% ( 9) 00:09:00.261 25.018 - 25.135: 98.4065% ( 12) 00:09:00.261 25.135 - 25.251: 98.5009% ( 8) 00:09:00.261 25.251 - 25.367: 98.5364% ( 3) 00:09:00.261 25.367 - 25.484: 98.5954% ( 5) 00:09:00.261 25.484 - 25.600: 98.6190% ( 2) 00:09:00.261 25.600 - 25.716: 98.6308% ( 1) 00:09:00.261 25.716 - 25.833: 98.6780% ( 4) 00:09:00.261 25.833 - 25.949: 98.7252% ( 4) 00:09:00.261 25.949 - 26.065: 98.7724% ( 4) 00:09:00.261 26.065 - 26.182: 98.8078% ( 3) 00:09:00.261 26.298 - 26.415: 98.8196% ( 1) 00:09:00.261 26.415 - 26.531: 98.8432% ( 2) 00:09:00.261 26.531 - 26.647: 98.8551% ( 1) 00:09:00.261 26.764 - 26.880: 98.9023% ( 4) 00:09:00.261 26.880 - 26.996: 98.9141% ( 1) 00:09:00.261 27.113 - 27.229: 98.9259% ( 1) 00:09:00.261 27.229 - 27.345: 98.9849% ( 5) 00:09:00.261 27.345 - 27.462: 98.9967% ( 1) 00:09:00.261 27.695 - 27.811: 99.0085% ( 1) 00:09:00.261 27.811 - 27.927: 99.0203% ( 1) 00:09:00.261 28.044 - 28.160: 99.0439% ( 2) 00:09:00.261 28.160 - 28.276: 99.0675% ( 2) 00:09:00.261 28.393 - 28.509: 99.0911% ( 2) 00:09:00.261 28.625 - 28.742: 99.1383% ( 4) 00:09:00.261 28.742 - 28.858: 99.1501% ( 1) 00:09:00.261 28.975 - 29.091: 99.1856% ( 3) 00:09:00.261 29.091 - 29.207: 99.2328% ( 4) 00:09:00.261 29.207 - 29.324: 99.2446% ( 1) 00:09:00.261 29.324 - 29.440: 99.2564% ( 1) 00:09:00.261 29.440 - 29.556: 99.2918% ( 3) 00:09:00.261 29.556 - 29.673: 99.3154% ( 2) 00:09:00.261 29.673 - 29.789: 99.3272% ( 1) 00:09:00.261 29.789 - 30.022: 99.3744% ( 4) 00:09:00.261 30.022 - 30.255: 99.4098% ( 3) 00:09:00.261 30.255 - 30.487: 99.4570% ( 4) 00:09:00.261 30.487 - 30.720: 99.5042% ( 4) 00:09:00.261 30.720 - 30.953: 99.5751% ( 6) 00:09:00.261 30.953 - 31.185: 99.5869% ( 1) 00:09:00.261 31.185 - 31.418: 99.6105% ( 2) 00:09:00.261 31.418 - 31.651: 99.6223% ( 1) 00:09:00.261 31.651 - 31.884: 99.6459% ( 2) 00:09:00.261 31.884 - 32.116: 99.6695% ( 2) 00:09:00.261 32.116 - 32.349: 99.7049% ( 3) 00:09:00.261 32.349 - 32.582: 99.7167% ( 1) 00:09:00.261 32.582 - 32.815: 99.7403% ( 2) 00:09:00.261 33.047 - 33.280: 99.7521% ( 1) 00:09:00.261 33.280 - 33.513: 99.7639% ( 1) 00:09:00.261 35.375 - 35.607: 99.7875% ( 2) 00:09:00.261 37.702 - 37.935: 99.7993% ( 1) 00:09:00.261 38.633 - 38.865: 99.8111% ( 1) 00:09:00.261 39.098 - 39.331: 99.8229% ( 1) 00:09:00.261 39.331 - 39.564: 99.8347% ( 1) 00:09:00.261 40.262 - 40.495: 99.8584% ( 2) 00:09:00.261 40.960 - 41.193: 99.8702% ( 1) 00:09:00.261 41.425 - 41.658: 99.8820% ( 1) 00:09:00.261 41.658 - 41.891: 99.8938% ( 1) 00:09:00.261 42.589 - 42.822: 99.9056% ( 1) 00:09:00.261 43.055 - 43.287: 99.9174% ( 1) 00:09:00.261 46.080 - 46.313: 99.9292% ( 1) 00:09:00.261 46.313 - 46.545: 99.9410% ( 1) 00:09:00.261 47.244 - 47.476: 99.9528% ( 1) 00:09:00.261 47.709 - 47.942: 99.9764% ( 2) 00:09:00.261 62.371 - 62.836: 99.9882% ( 1) 00:09:00.261 101.469 - 101.935: 100.0000% ( 1) 00:09:00.261 00:09:00.261 00:09:00.261 real 0m1.291s 00:09:00.261 user 0m1.107s 00:09:00.261 sys 0m0.132s 00:09:00.261 14:14:19 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.261 14:14:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:00.261 
************************************ 00:09:00.261 END TEST nvme_overhead 00:09:00.261 ************************************ 00:09:00.261 14:14:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:00.261 14:14:19 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:00.261 14:14:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.261 14:14:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.261 ************************************ 00:09:00.261 START TEST nvme_arbitration 00:09:00.261 ************************************ 00:09:00.261 14:14:19 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:03.550 Initializing NVMe Controllers 00:09:03.550 Attached to 0000:00:10.0 00:09:03.550 Attached to 0000:00:11.0 00:09:03.550 Attached to 0000:00:13.0 00:09:03.550 Attached to 0000:00:12.0 00:09:03.550 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:03.550 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:03.550 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:03.550 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:03.550 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:03.550 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:03.550 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:03.550 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:03.550 Initialization complete. Launching workers. 00:09:03.550 Starting thread on core 1 with urgent priority queue 00:09:03.550 Starting thread on core 2 with urgent priority queue 00:09:03.550 Starting thread on core 3 with urgent priority queue 00:09:03.550 Starting thread on core 0 with urgent priority queue 00:09:03.550 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:09:03.550 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:09:03.550 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:09:03.550 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:09:03.550 QEMU NVMe Ctrl (12343 ) core 2: 661.33 IO/s 151.21 secs/100000 ios 00:09:03.550 QEMU NVMe Ctrl (12342 ) core 3: 682.67 IO/s 146.48 secs/100000 ios 00:09:03.550 ======================================================== 00:09:03.550 00:09:03.550 ************************************ 00:09:03.550 END TEST nvme_arbitration 00:09:03.550 ************************************ 00:09:03.550 00:09:03.550 real 0m3.358s 00:09:03.550 user 0m9.314s 00:09:03.550 sys 0m0.135s 00:09:03.550 14:14:23 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.550 14:14:23 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:03.550 14:14:23 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:03.550 14:14:23 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:03.550 14:14:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.550 14:14:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.550 ************************************ 00:09:03.550 START TEST nvme_single_aen 00:09:03.550 ************************************ 00:09:03.551 14:14:23 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:03.809 
Asynchronous Event Request test 00:09:03.809 Attached to 0000:00:10.0 00:09:03.809 Attached to 0000:00:11.0 00:09:03.809 Attached to 0000:00:13.0 00:09:03.809 Attached to 0000:00:12.0 00:09:03.809 Reset controller to setup AER completions for this process 00:09:03.809 Registering asynchronous event callbacks... 00:09:03.809 Getting orig temperature thresholds of all controllers 00:09:03.809 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.809 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.809 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.809 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:03.809 Setting all controllers temperature threshold low to trigger AER 00:09:03.809 Waiting for all controllers temperature threshold to be set lower 00:09:03.809 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.809 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:03.809 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.809 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:03.809 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.809 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:03.809 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:03.809 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:03.809 Waiting for all controllers to trigger AER and reset threshold 00:09:03.809 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.809 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.809 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.809 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:03.809 Cleaning up... 
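A minimal sketch of reproducing the AER check by hand, assuming the repo is built at the path this job uses; the flags are copied verbatim from the run_test invocations in this log, and the comments only restate what the captured output shows:
# nvme_single_aen invocation, exactly as run_test passes it above
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
# nvme_multi_aen variant used later in this log; per the output below, -m additionally runs the checks from a child process ("Child process pid: ...")
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0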
00:09:03.809 ************************************ 00:09:03.809 END TEST nvme_single_aen 00:09:03.809 ************************************ 00:09:03.809 00:09:03.809 real 0m0.247s 00:09:03.809 user 0m0.092s 00:09:03.809 sys 0m0.108s 00:09:03.809 14:14:23 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:03.809 14:14:23 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:03.809 14:14:23 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:03.809 14:14:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.809 14:14:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.809 14:14:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.809 ************************************ 00:09:03.809 START TEST nvme_doorbell_aers 00:09:03.809 ************************************ 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:03.809 14:14:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:04.068 [2024-07-26 14:14:23.813255] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:14.044 Executing: test_write_invalid_db 00:09:14.044 Waiting for AER completion... 00:09:14.044 Failure: test_write_invalid_db 00:09:14.044 00:09:14.044 Executing: test_invalid_db_write_overflow_sq 00:09:14.044 Waiting for AER completion... 00:09:14.044 Failure: test_invalid_db_write_overflow_sq 00:09:14.044 00:09:14.044 Executing: test_invalid_db_write_overflow_cq 00:09:14.044 Waiting for AER completion... 
00:09:14.044 Failure: test_invalid_db_write_overflow_cq 00:09:14.044 00:09:14.044 14:14:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:14.044 14:14:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:14.303 [2024-07-26 14:14:33.880372] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:24.281 Executing: test_write_invalid_db 00:09:24.281 Waiting for AER completion... 00:09:24.281 Failure: test_write_invalid_db 00:09:24.281 00:09:24.281 Executing: test_invalid_db_write_overflow_sq 00:09:24.281 Waiting for AER completion... 00:09:24.281 Failure: test_invalid_db_write_overflow_sq 00:09:24.281 00:09:24.281 Executing: test_invalid_db_write_overflow_cq 00:09:24.281 Waiting for AER completion... 00:09:24.282 Failure: test_invalid_db_write_overflow_cq 00:09:24.282 00:09:24.282 14:14:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:24.282 14:14:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:24.282 [2024-07-26 14:14:43.897671] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:34.260 Executing: test_write_invalid_db 00:09:34.260 Waiting for AER completion... 00:09:34.260 Failure: test_write_invalid_db 00:09:34.260 00:09:34.260 Executing: test_invalid_db_write_overflow_sq 00:09:34.260 Waiting for AER completion... 00:09:34.260 Failure: test_invalid_db_write_overflow_sq 00:09:34.260 00:09:34.260 Executing: test_invalid_db_write_overflow_cq 00:09:34.260 Waiting for AER completion... 00:09:34.260 Failure: test_invalid_db_write_overflow_cq 00:09:34.260 00:09:34.260 14:14:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:34.260 14:14:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:34.260 [2024-07-26 14:14:53.952158] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.236 Executing: test_write_invalid_db 00:09:44.236 Waiting for AER completion... 00:09:44.236 Failure: test_write_invalid_db 00:09:44.236 00:09:44.236 Executing: test_invalid_db_write_overflow_sq 00:09:44.236 Waiting for AER completion... 00:09:44.236 Failure: test_invalid_db_write_overflow_sq 00:09:44.236 00:09:44.236 Executing: test_invalid_db_write_overflow_cq 00:09:44.236 Waiting for AER completion... 
00:09:44.236 Failure: test_invalid_db_write_overflow_cq 00:09:44.236 00:09:44.236 ************************************ 00:09:44.236 END TEST nvme_doorbell_aers 00:09:44.236 ************************************ 00:09:44.236 00:09:44.236 real 0m40.242s 00:09:44.236 user 0m34.053s 00:09:44.236 sys 0m5.809s 00:09:44.236 14:15:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.236 14:15:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:44.236 14:15:03 nvme -- nvme/nvme.sh@97 -- # uname 00:09:44.236 14:15:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:44.236 14:15:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:44.236 14:15:03 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:44.236 14:15:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.236 14:15:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.236 ************************************ 00:09:44.236 START TEST nvme_multi_aen 00:09:44.236 ************************************ 00:09:44.236 14:15:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:44.495 [2024-07-26 14:15:04.016459] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.016594] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.016617] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.018839] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.018958] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.018996] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.021038] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.021123] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.021172] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.023101] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.023210] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 00:09:44.495 [2024-07-26 14:15:04.023250] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68249) is not found. Dropping the request. 
00:09:44.495 Child process pid: 68770 00:09:44.754 [Child] Asynchronous Event Request test 00:09:44.754 [Child] Attached to 0000:00:10.0 00:09:44.754 [Child] Attached to 0000:00:11.0 00:09:44.754 [Child] Attached to 0000:00:13.0 00:09:44.754 [Child] Attached to 0000:00:12.0 00:09:44.754 [Child] Registering asynchronous event callbacks... 00:09:44.754 [Child] Getting orig temperature thresholds of all controllers 00:09:44.754 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:44.754 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.754 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.754 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.754 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.754 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.754 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.754 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.754 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.754 [Child] Cleaning up... 00:09:44.754 Asynchronous Event Request test 00:09:44.754 Attached to 0000:00:10.0 00:09:44.754 Attached to 0000:00:11.0 00:09:44.754 Attached to 0000:00:13.0 00:09:44.754 Attached to 0000:00:12.0 00:09:44.754 Reset controller to setup AER completions for this process 00:09:44.754 Registering asynchronous event callbacks... 
00:09:44.754 Getting orig temperature thresholds of all controllers 00:09:44.754 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:44.754 Setting all controllers temperature threshold low to trigger AER 00:09:44.754 Waiting for all controllers temperature threshold to be set lower 00:09:44.754 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.754 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:44.755 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.755 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:44.755 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.755 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:44.755 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:44.755 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:44.755 Waiting for all controllers to trigger AER and reset threshold 00:09:44.755 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.755 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.755 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.755 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:44.755 Cleaning up... 00:09:44.755 00:09:44.755 real 0m0.580s 00:09:44.755 user 0m0.208s 00:09:44.755 sys 0m0.271s 00:09:44.755 14:15:04 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.755 14:15:04 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:44.755 ************************************ 00:09:44.755 END TEST nvme_multi_aen 00:09:44.755 ************************************ 00:09:44.755 14:15:04 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:44.755 14:15:04 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:44.755 14:15:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.755 14:15:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.755 ************************************ 00:09:44.755 START TEST nvme_startup 00:09:44.755 ************************************ 00:09:44.755 14:15:04 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:45.014 Initializing NVMe Controllers 00:09:45.014 Attached to 0000:00:10.0 00:09:45.014 Attached to 0000:00:11.0 00:09:45.014 Attached to 0000:00:13.0 00:09:45.014 Attached to 0000:00:12.0 00:09:45.014 Initialization complete. 00:09:45.014 Time used:184225.562 (us). 
00:09:45.014 00:09:45.014 real 0m0.276s 00:09:45.014 user 0m0.110s 00:09:45.014 sys 0m0.116s 00:09:45.014 14:15:04 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:45.014 14:15:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:45.014 ************************************ 00:09:45.014 END TEST nvme_startup 00:09:45.014 ************************************ 00:09:45.014 14:15:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:45.014 14:15:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:45.014 14:15:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:45.014 14:15:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.014 ************************************ 00:09:45.014 START TEST nvme_multi_secondary 00:09:45.014 ************************************ 00:09:45.014 14:15:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:45.014 14:15:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=68826 00:09:45.014 14:15:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:45.014 14:15:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=68827 00:09:45.014 14:15:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:45.014 14:15:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:49.202 Initializing NVMe Controllers 00:09:49.202 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.202 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.202 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.202 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.202 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:49.202 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:49.202 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:49.202 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:49.202 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:49.202 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:49.202 Initialization complete. Launching workers. 
00:09:49.202 ======================================================== 00:09:49.202 Latency(us) 00:09:49.202 Device Information : IOPS MiB/s Average min max 00:09:49.202 PCIE (0000:00:10.0) NSID 1 from core 2: 2435.17 9.51 6567.59 1398.24 13313.08 00:09:49.202 PCIE (0000:00:11.0) NSID 1 from core 2: 2435.17 9.51 6578.94 1608.36 15323.68 00:09:49.203 PCIE (0000:00:13.0) NSID 1 from core 2: 2435.17 9.51 6579.00 1367.41 12961.60 00:09:49.203 PCIE (0000:00:12.0) NSID 1 from core 2: 2435.17 9.51 6578.40 1502.26 14677.11 00:09:49.203 PCIE (0000:00:12.0) NSID 2 from core 2: 2435.17 9.51 6580.00 1541.40 13424.90 00:09:49.203 PCIE (0000:00:12.0) NSID 3 from core 2: 2435.17 9.51 6580.13 1408.09 13183.47 00:09:49.203 ======================================================== 00:09:49.203 Total : 14611.04 57.07 6577.34 1367.41 15323.68 00:09:49.203 00:09:49.203 14:15:08 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 68826 00:09:49.203 Initializing NVMe Controllers 00:09:49.203 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.203 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.203 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.203 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.203 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:49.203 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:49.203 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:49.203 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:49.203 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:49.203 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:49.203 Initialization complete. Launching workers. 00:09:49.203 ======================================================== 00:09:49.203 Latency(us) 00:09:49.203 Device Information : IOPS MiB/s Average min max 00:09:49.203 PCIE (0000:00:10.0) NSID 1 from core 1: 5242.64 20.48 3050.11 1330.64 6896.54 00:09:49.203 PCIE (0000:00:11.0) NSID 1 from core 1: 5242.64 20.48 3051.29 1445.59 5889.47 00:09:49.203 PCIE (0000:00:13.0) NSID 1 from core 1: 5242.64 20.48 3051.35 1410.90 5812.09 00:09:49.203 PCIE (0000:00:12.0) NSID 1 from core 1: 5242.64 20.48 3051.28 1482.92 6192.08 00:09:49.203 PCIE (0000:00:12.0) NSID 2 from core 1: 5242.64 20.48 3051.25 1345.47 6725.60 00:09:49.203 PCIE (0000:00:12.0) NSID 3 from core 1: 5242.64 20.48 3051.20 1407.93 6843.68 00:09:49.203 ======================================================== 00:09:49.203 Total : 31455.82 122.87 3051.08 1330.64 6896.54 00:09:49.203 00:09:50.603 Initializing NVMe Controllers 00:09:50.603 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:50.603 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:50.603 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:50.603 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:50.603 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:50.603 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:50.603 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:50.603 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:50.603 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:50.603 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:50.603 Initialization complete. Launching workers. 
00:09:50.603 ======================================================== 00:09:50.603 Latency(us) 00:09:50.603 Device Information : IOPS MiB/s Average min max 00:09:50.603 PCIE (0000:00:10.0) NSID 1 from core 0: 8143.74 31.81 1963.18 924.74 8567.56 00:09:50.603 PCIE (0000:00:11.0) NSID 1 from core 0: 8143.74 31.81 1964.20 948.95 8612.67 00:09:50.603 PCIE (0000:00:13.0) NSID 1 from core 0: 8143.74 31.81 1964.16 935.47 8848.13 00:09:50.603 PCIE (0000:00:12.0) NSID 1 from core 0: 8143.74 31.81 1964.12 944.08 8909.71 00:09:50.603 PCIE (0000:00:12.0) NSID 2 from core 0: 8143.74 31.81 1964.06 909.52 9077.12 00:09:50.603 PCIE (0000:00:12.0) NSID 3 from core 0: 8143.74 31.81 1964.01 895.51 9076.23 00:09:50.603 ======================================================== 00:09:50.603 Total : 48862.46 190.87 1963.96 895.51 9077.12 00:09:50.603 00:09:50.603 14:15:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 68827 00:09:50.603 14:15:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=68895 00:09:50.603 14:15:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:50.603 14:15:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=68896 00:09:50.603 14:15:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:50.603 14:15:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:53.891 Initializing NVMe Controllers 00:09:53.891 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.891 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.891 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.891 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.891 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:53.891 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:53.891 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:53.891 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:53.891 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:53.891 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:53.891 Initialization complete. Launching workers. 
00:09:53.891 ======================================================== 00:09:53.891 Latency(us) 00:09:53.891 Device Information : IOPS MiB/s Average min max 00:09:53.891 PCIE (0000:00:10.0) NSID 1 from core 1: 5288.73 20.66 3023.43 1034.84 8668.78 00:09:53.891 PCIE (0000:00:11.0) NSID 1 from core 1: 5288.73 20.66 3024.80 1045.32 7658.66 00:09:53.891 PCIE (0000:00:13.0) NSID 1 from core 1: 5288.73 20.66 3024.73 1075.93 7724.66 00:09:53.891 PCIE (0000:00:12.0) NSID 1 from core 1: 5288.73 20.66 3024.66 1039.98 7337.32 00:09:53.891 PCIE (0000:00:12.0) NSID 2 from core 1: 5288.73 20.66 3024.71 1074.48 7442.37 00:09:53.891 PCIE (0000:00:12.0) NSID 3 from core 1: 5288.73 20.66 3024.62 1070.86 7797.16 00:09:53.891 ======================================================== 00:09:53.891 Total : 31732.40 123.95 3024.49 1034.84 8668.78 00:09:53.891 00:09:53.891 Initializing NVMe Controllers 00:09:53.891 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:53.891 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:53.891 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:53.891 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:53.891 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:53.891 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:53.891 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:53.891 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:53.891 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:53.891 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:53.891 Initialization complete. Launching workers. 00:09:53.891 ======================================================== 00:09:53.891 Latency(us) 00:09:53.891 Device Information : IOPS MiB/s Average min max 00:09:53.891 PCIE (0000:00:10.0) NSID 1 from core 0: 5667.87 22.14 2821.10 1013.47 5497.22 00:09:53.891 PCIE (0000:00:11.0) NSID 1 from core 0: 5667.87 22.14 2822.08 1024.75 5401.26 00:09:53.891 PCIE (0000:00:13.0) NSID 1 from core 0: 5667.87 22.14 2821.88 1010.91 5642.03 00:09:53.891 PCIE (0000:00:12.0) NSID 1 from core 0: 5667.87 22.14 2821.77 999.74 5398.71 00:09:53.891 PCIE (0000:00:12.0) NSID 2 from core 0: 5667.87 22.14 2821.62 1014.97 6020.71 00:09:53.891 PCIE (0000:00:12.0) NSID 3 from core 0: 5667.87 22.14 2821.57 995.28 5537.89 00:09:53.891 ======================================================== 00:09:53.891 Total : 34007.20 132.84 2821.67 995.28 6020.71 00:09:53.891 00:09:55.792 Initializing NVMe Controllers 00:09:55.792 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:55.792 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:55.792 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:55.792 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:55.792 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:55.792 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:55.792 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:55.792 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:55.792 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:55.792 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:55.792 Initialization complete. Launching workers. 
00:09:55.792 ======================================================== 00:09:55.792 Latency(us) 00:09:55.792 Device Information : IOPS MiB/s Average min max 00:09:55.792 PCIE (0000:00:10.0) NSID 1 from core 2: 3564.65 13.92 4482.80 1020.58 13085.44 00:09:55.792 PCIE (0000:00:11.0) NSID 1 from core 2: 3564.65 13.92 4484.48 1009.75 13118.73 00:09:55.792 PCIE (0000:00:13.0) NSID 1 from core 2: 3564.65 13.92 4485.00 1011.24 13432.44 00:09:55.792 PCIE (0000:00:12.0) NSID 1 from core 2: 3564.65 13.92 4484.05 999.50 13477.93 00:09:55.792 PCIE (0000:00:12.0) NSID 2 from core 2: 3564.65 13.92 4484.64 906.49 13207.29 00:09:55.792 PCIE (0000:00:12.0) NSID 3 from core 2: 3564.65 13.92 4484.62 773.96 16528.75 00:09:55.792 ======================================================== 00:09:55.792 Total : 21387.91 83.55 4484.27 773.96 16528.75 00:09:55.793 00:09:55.793 ************************************ 00:09:55.793 END TEST nvme_multi_secondary 00:09:55.793 ************************************ 00:09:55.793 14:15:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 68895 00:09:55.793 14:15:15 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 68896 00:09:55.793 00:09:55.793 real 0m10.668s 00:09:55.793 user 0m18.646s 00:09:55.793 sys 0m0.920s 00:09:55.793 14:15:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.793 14:15:15 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:55.793 14:15:15 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:55.793 14:15:15 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:55.793 14:15:15 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/67834 ]] 00:09:55.793 14:15:15 nvme -- common/autotest_common.sh@1090 -- # kill 67834 00:09:55.793 14:15:15 nvme -- common/autotest_common.sh@1091 -- # wait 67834 00:09:55.793 [2024-07-26 14:15:15.457504] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.457637] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.457683] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.457722] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.461680] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.461784] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.461827] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.461935] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.465826] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 
00:09:55.793 [2024-07-26 14:15:15.465952] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.465997] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.466035] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.468848] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.468925] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.468951] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:55.793 [2024-07-26 14:15:15.468973] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68769) is not found. Dropping the request. 00:09:56.052 [2024-07-26 14:15:15.753075] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:56.052 14:15:15 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:56.052 14:15:15 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:56.052 14:15:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:56.052 14:15:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:56.052 14:15:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.052 14:15:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.052 ************************************ 00:09:56.052 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:56.052 ************************************ 00:09:56.052 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:56.310 * Looking for test storage... 
00:09:56.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:09:56.310 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69052 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69052 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69052 ']' 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.311 14:15:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:56.311 [2024-07-26 14:15:16.060513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:56.311 [2024-07-26 14:15:16.060966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69052 ] 00:09:56.569 [2024-07-26 14:15:16.254708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.826 [2024-07-26 14:15:16.478790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.826 [2024-07-26 14:15:16.478937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.826 [2024-07-26 14:15:16.479029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.826 [2024-07-26 14:15:16.479044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.759 nvme0n1 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_5Few1.txt 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.759 true 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1722003317 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69075 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:57.759 14:15:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:59.656 [2024-07-26 14:15:19.260992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:59.656 [2024-07-26 14:15:19.261387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:59.656 [2024-07-26 14:15:19.261419] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:59.656 [2024-07-26 14:15:19.261448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.656 [2024-07-26 14:15:19.263717] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:59.656 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69075 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69075 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69075 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_5Few1.txt 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_5Few1.txt 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69052 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69052 ']' 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69052 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69052 00:09:59.656 killing process with pid 69052 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69052' 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69052 00:09:59.656 14:15:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69052 00:10:01.576 14:15:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:01.576 14:15:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:01.576 00:10:01.576 real 0m5.536s 00:10:01.576 user 0m19.027s 00:10:01.576 sys 0m0.570s 00:10:01.576 14:15:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:01.576 14:15:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:01.576 ************************************ 00:10:01.576 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:01.576 ************************************ 00:10:01.835 14:15:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:01.835 14:15:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:01.835 14:15:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:01.835 14:15:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:01.835 14:15:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:01.835 ************************************ 00:10:01.835 START TEST nvme_fio 00:10:01.835 ************************************ 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:01.835 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:01.835 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:02.094 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:02.094 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:02.353 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:02.353 14:15:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:02.353 14:15:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:02.611 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:02.611 fio-3.35 00:10:02.611 Starting 1 thread 00:10:05.896 00:10:05.896 test: (groupid=0, jobs=1): err= 0: pid=69221: Fri Jul 26 14:15:25 2024 00:10:05.896 read: IOPS=14.5k, BW=56.6MiB/s (59.4MB/s)(113MiB/2001msec) 00:10:05.896 slat (nsec): min=4310, max=46907, avg=6399.67, stdev=2291.97 00:10:05.896 clat (usec): min=484, max=9493, avg=4391.22, stdev=536.89 00:10:05.896 lat (usec): min=489, max=9498, avg=4397.62, stdev=537.64 00:10:05.896 clat percentiles (usec): 00:10:05.896 | 1.00th=[ 3720], 5.00th=[ 3818], 10.00th=[ 3884], 20.00th=[ 3982], 00:10:05.896 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4490], 00:10:05.896 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5145], 00:10:05.896 | 99.00th=[ 6063], 99.50th=[ 7439], 99.90th=[ 9110], 99.95th=[ 9110], 00:10:05.896 | 99.99th=[ 9372] 00:10:05.896 bw ( KiB/s): min=55312, max=60176, per=98.88%, avg=57314.67, stdev=2543.15, samples=3 00:10:05.897 iops : min=13828, max=15044, avg=14328.67, stdev=635.79, samples=3 00:10:05.897 write: IOPS=14.5k, BW=56.7MiB/s (59.4MB/s)(113MiB/2001msec); 0 zone resets 00:10:05.897 slat (nsec): min=4478, max=76224, avg=6654.19, stdev=2473.22 00:10:05.897 clat (usec): min=287, max=9360, avg=4401.68, stdev=560.04 00:10:05.897 lat (usec): min=293, max=9366, avg=4408.33, stdev=560.92 00:10:05.897 clat percentiles (usec): 00:10:05.897 | 1.00th=[ 3720], 5.00th=[ 3818], 10.00th=[ 3916], 20.00th=[ 3982], 00:10:05.897 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4293], 60.00th=[ 4490], 00:10:05.897 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 4948], 95.00th=[ 5145], 00:10:05.897 | 99.00th=[ 6521], 99.50th=[ 7832], 99.90th=[ 9110], 99.95th=[ 9110], 00:10:05.897 | 99.99th=[ 9372] 00:10:05.897 bw ( KiB/s): min=55216, max=59712, per=98.57%, avg=57213.33, stdev=2289.54, 
samples=3 00:10:05.897 iops : min=13804, max=14928, avg=14303.33, stdev=572.39, samples=3 00:10:05.897 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:05.897 lat (msec) : 2=0.06%, 4=20.77%, 10=79.15% 00:10:05.897 cpu : usr=99.00%, sys=0.00%, ctx=4, majf=0, minf=606 00:10:05.897 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:05.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:05.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:05.897 issued rwts: total=28997,29037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:05.897 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:05.897 00:10:05.897 Run status group 0 (all jobs): 00:10:05.897 READ: bw=56.6MiB/s (59.4MB/s), 56.6MiB/s-56.6MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:10:05.897 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=113MiB (119MB), run=2001-2001msec 00:10:05.897 ----------------------------------------------------- 00:10:05.897 Suppressions used: 00:10:05.897 count bytes template 00:10:05.897 1 32 /usr/src/fio/parse.c 00:10:05.897 1 8 libtcmalloc_minimal.so 00:10:05.897 ----------------------------------------------------- 00:10:05.897 00:10:05.897 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:05.897 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:05.897 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:05.897 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:05.897 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:05.897 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:06.156 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:06.156 14:15:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:06.156 14:15:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:06.414 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:06.414 fio-3.35 00:10:06.414 Starting 1 thread 00:10:09.700 00:10:09.700 test: (groupid=0, jobs=1): err= 0: pid=69282: Fri Jul 26 14:15:29 2024 00:10:09.700 read: IOPS=15.4k, BW=60.1MiB/s (63.0MB/s)(120MiB/2001msec) 00:10:09.700 slat (nsec): min=4421, max=62017, avg=6375.23, stdev=2554.21 00:10:09.700 clat (usec): min=420, max=10491, avg=4134.02, stdev=367.62 00:10:09.700 lat (usec): min=424, max=10503, avg=4140.39, stdev=367.97 00:10:09.700 clat percentiles (usec): 00:10:09.700 | 1.00th=[ 3589], 5.00th=[ 3720], 10.00th=[ 3785], 20.00th=[ 3851], 00:10:09.700 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:10:09.700 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4817], 00:10:09.700 | 99.00th=[ 5080], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[ 6194], 00:10:09.700 | 99.99th=[ 7570] 00:10:09.700 bw ( KiB/s): min=56552, max=62544, per=98.29%, avg=60501.33, stdev=3420.90, samples=3 00:10:09.700 iops : min=14138, max=15636, avg=15125.33, stdev=855.22, samples=3 00:10:09.700 write: IOPS=15.4k, BW=60.2MiB/s (63.1MB/s)(120MiB/2001msec); 0 zone resets 00:10:09.700 slat (nsec): min=4424, max=51096, avg=6549.06, stdev=2636.57 00:10:09.700 clat (usec): min=270, max=12273, avg=4148.34, stdev=463.63 00:10:09.700 lat (usec): min=276, max=12283, avg=4154.89, stdev=464.08 00:10:09.700 clat percentiles (usec): 00:10:09.700 | 1.00th=[ 3589], 5.00th=[ 3720], 10.00th=[ 3785], 20.00th=[ 3851], 00:10:09.700 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:10:09.700 | 70.00th=[ 4228], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 4817], 00:10:09.700 | 99.00th=[ 5080], 99.50th=[ 5211], 99.90th=[10945], 99.95th=[11600], 00:10:09.700 | 99.99th=[12125] 00:10:09.700 bw ( KiB/s): min=56856, max=61752, per=97.60%, avg=60117.33, stdev=2824.40, samples=3 00:10:09.700 iops : min=14214, max=15438, avg=15029.33, stdev=706.10, samples=3 00:10:09.700 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:09.700 lat (msec) : 2=0.06%, 4=41.33%, 10=58.50%, 20=0.08% 00:10:09.700 cpu : usr=98.85%, sys=0.20%, ctx=4, majf=0, minf=607 00:10:09.700 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:09.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:09.700 issued rwts: total=30792,30814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.700 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:09.700 00:10:09.700 Run status group 0 (all jobs): 00:10:09.700 READ: bw=60.1MiB/s (63.0MB/s), 60.1MiB/s-60.1MiB/s (63.0MB/s-63.0MB/s), io=120MiB (126MB), run=2001-2001msec 00:10:09.700 WRITE: bw=60.2MiB/s (63.1MB/s), 60.2MiB/s-60.2MiB/s (63.1MB/s-63.1MB/s), io=120MiB (126MB), run=2001-2001msec 00:10:09.700 
----------------------------------------------------- 00:10:09.700 Suppressions used: 00:10:09.700 count bytes template 00:10:09.700 1 32 /usr/src/fio/parse.c 00:10:09.701 1 8 libtcmalloc_minimal.so 00:10:09.701 ----------------------------------------------------- 00:10:09.701 00:10:09.701 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:09.701 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:09.701 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:09.701 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:09.959 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:09.959 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:10.219 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:10.219 14:15:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:10.219 14:15:29 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:10.477 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:10.477 fio-3.35 00:10:10.477 Starting 1 thread 00:10:13.765 00:10:13.765 test: (groupid=0, jobs=1): err= 0: pid=69346: Fri Jul 26 14:15:33 2024 00:10:13.765 read: IOPS=15.2k, BW=59.4MiB/s (62.3MB/s)(119MiB/2001msec) 00:10:13.765 slat (nsec): min=4119, max=76955, avg=6171.74, stdev=2668.01 
00:10:13.765 clat (usec): min=275, max=9404, avg=4186.22, stdev=581.00 00:10:13.765 lat (usec): min=280, max=9481, avg=4192.39, stdev=581.66 00:10:13.765 clat percentiles (usec): 00:10:13.765 | 1.00th=[ 3228], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3687], 00:10:13.765 | 30.00th=[ 3785], 40.00th=[ 3949], 50.00th=[ 4178], 60.00th=[ 4359], 00:10:13.765 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5014], 00:10:13.765 | 99.00th=[ 5669], 99.50th=[ 6325], 99.90th=[ 8455], 99.95th=[ 8586], 00:10:13.765 | 99.99th=[ 9372] 00:10:13.765 bw ( KiB/s): min=59184, max=63288, per=100.00%, avg=61328.00, stdev=2058.18, samples=3 00:10:13.765 iops : min=14796, max=15822, avg=15332.00, stdev=514.54, samples=3 00:10:13.765 write: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(119MiB/2001msec); 0 zone resets 00:10:13.765 slat (nsec): min=4097, max=57773, avg=6338.70, stdev=2716.62 00:10:13.765 clat (usec): min=266, max=10636, avg=4192.14, stdev=610.00 00:10:13.765 lat (usec): min=271, max=10644, avg=4198.48, stdev=610.63 00:10:13.765 clat percentiles (usec): 00:10:13.765 | 1.00th=[ 3261], 5.00th=[ 3458], 10.00th=[ 3556], 20.00th=[ 3687], 00:10:13.765 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4178], 60.00th=[ 4359], 00:10:13.765 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4817], 95.00th=[ 5014], 00:10:13.765 | 99.00th=[ 5669], 99.50th=[ 6783], 99.90th=[ 9765], 99.95th=[10159], 00:10:13.765 | 99.99th=[10552] 00:10:13.765 bw ( KiB/s): min=58104, max=63568, per=99.88%, avg=60882.67, stdev=2733.20, samples=3 00:10:13.765 iops : min=14526, max=15892, avg=15220.67, stdev=683.30, samples=3 00:10:13.765 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:13.765 lat (msec) : 2=0.06%, 4=42.70%, 10=57.16%, 20=0.04% 00:10:13.765 cpu : usr=98.90%, sys=0.15%, ctx=3, majf=0, minf=606 00:10:13.765 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:13.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.765 issued rwts: total=30431,30493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.765 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.765 00:10:13.765 Run status group 0 (all jobs): 00:10:13.765 READ: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=119MiB (125MB), run=2001-2001msec 00:10:13.765 WRITE: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=119MiB (125MB), run=2001-2001msec 00:10:13.765 ----------------------------------------------------- 00:10:13.765 Suppressions used: 00:10:13.765 count bytes template 00:10:13.765 1 32 /usr/src/fio/parse.c 00:10:13.765 1 8 libtcmalloc_minimal.so 00:10:13.765 ----------------------------------------------------- 00:10:13.765 00:10:13.765 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:13.765 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:13.765 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:13.765 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:14.023 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:14.023 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:14.281 14:15:33 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:14.281 14:15:33 nvme.nvme_fio -- 
nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:14.281 14:15:33 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:14.540 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:14.540 fio-3.35 00:10:14.540 Starting 1 thread 00:10:18.736 00:10:18.736 test: (groupid=0, jobs=1): err= 0: pid=69409: Fri Jul 26 14:15:38 2024 00:10:18.736 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec) 00:10:18.736 slat (nsec): min=4221, max=68439, avg=5969.02, stdev=2671.56 00:10:18.737 clat (usec): min=373, max=9706, avg=4040.16, stdev=533.65 00:10:18.737 lat (usec): min=379, max=9751, avg=4046.13, stdev=534.35 00:10:18.737 clat percentiles (usec): 00:10:18.737 | 1.00th=[ 3294], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3589], 00:10:18.737 | 30.00th=[ 3720], 40.00th=[ 3818], 50.00th=[ 3982], 60.00th=[ 4113], 00:10:18.737 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4817], 00:10:18.737 | 99.00th=[ 5669], 99.50th=[ 6390], 99.90th=[ 8291], 99.95th=[ 8586], 00:10:18.737 | 99.99th=[ 9503] 00:10:18.737 bw ( KiB/s): min=59056, max=67784, per=99.61%, avg=62874.67, stdev=4465.05, samples=3 00:10:18.737 iops : min=14764, max=16946, avg=15718.67, stdev=1116.26, samples=3 00:10:18.737 write: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec); 0 zone resets 00:10:18.737 slat (nsec): min=4249, max=55299, avg=6154.45, stdev=2782.95 00:10:18.737 clat (usec): min=329, max=10088, avg=4036.82, stdev=555.84 00:10:18.737 lat (usec): min=335, max=10098, avg=4042.98, stdev=556.54 00:10:18.737 
clat percentiles (usec): 00:10:18.737 | 1.00th=[ 3294], 5.00th=[ 3425], 10.00th=[ 3490], 20.00th=[ 3589], 00:10:18.737 | 30.00th=[ 3687], 40.00th=[ 3818], 50.00th=[ 3949], 60.00th=[ 4113], 00:10:18.737 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4817], 00:10:18.737 | 99.00th=[ 5538], 99.50th=[ 6325], 99.90th=[ 9634], 99.95th=[ 9765], 00:10:18.737 | 99.99th=[ 9896] 00:10:18.737 bw ( KiB/s): min=58464, max=66928, per=98.96%, avg=62517.33, stdev=4243.30, samples=3 00:10:18.737 iops : min=14616, max=16732, avg=15629.33, stdev=1060.82, samples=3 00:10:18.737 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:18.737 lat (msec) : 2=0.04%, 4=52.53%, 10=47.39%, 20=0.01% 00:10:18.737 cpu : usr=99.10%, sys=0.00%, ctx=2, majf=0, minf=604 00:10:18.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.737 issued rwts: total=31577,31604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.737 00:10:18.737 Run status group 0 (all jobs): 00:10:18.737 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec 00:10:18.737 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:10:18.997 ----------------------------------------------------- 00:10:18.997 Suppressions used: 00:10:18.997 count bytes template 00:10:18.997 1 32 /usr/src/fio/parse.c 00:10:18.997 1 8 libtcmalloc_minimal.so 00:10:18.997 ----------------------------------------------------- 00:10:18.997 00:10:18.997 14:15:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.997 14:15:38 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:18.997 00:10:18.997 real 0m17.221s 00:10:18.997 user 0m14.028s 00:10:18.997 sys 0m1.587s 00:10:18.997 14:15:38 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.997 14:15:38 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:18.997 ************************************ 00:10:18.997 END TEST nvme_fio 00:10:18.997 ************************************ 00:10:18.997 00:10:18.997 real 1m30.152s 00:10:18.997 user 3m43.135s 00:10:18.997 sys 0m13.523s 00:10:18.997 14:15:38 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:18.998 14:15:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.998 ************************************ 00:10:18.998 END TEST nvme 00:10:18.998 ************************************ 00:10:18.998 14:15:38 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:10:18.998 14:15:38 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:18.998 14:15:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:18.998 14:15:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:18.998 14:15:38 -- common/autotest_common.sh@10 -- # set +x 00:10:18.998 ************************************ 00:10:18.998 START TEST nvme_scc 00:10:18.998 ************************************ 00:10:18.998 14:15:38 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:19.257 * Looking for test storage... 
00:10:19.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:19.257 14:15:38 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.257 14:15:38 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.257 14:15:38 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.257 14:15:38 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.257 14:15:38 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.257 14:15:38 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.257 14:15:38 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.257 14:15:38 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:19.257 14:15:38 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:19.257 14:15:38 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:19.257 14:15:38 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:19.257 14:15:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:19.257 14:15:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:19.257 14:15:38 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:19.257 14:15:38 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:19.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:19.773 Waiting for block devices as requested 00:10:19.773 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.773 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.773 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:20.031 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.308 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:25.308 14:15:44 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:25.308 14:15:44 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:25.308 14:15:44 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:25.308 14:15:44 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:25.308 14:15:44 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:25.308 
14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:25.308 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:25.309 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.309 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:25.310 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
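(Side note, added for illustration only: the sqes/cqes values just recorded are packed log2 sizes, with the low nibble giving the required minimum entry size and the high nibble the maximum, each as a power of two. Decoding the 0x66 and 0x44 seen in the trace with plain shell arithmetic, which the test itself never does:

    $ v=0x66; echo "SQ entry: min $(( 1 << (v & 0xf) )) max $(( 1 << (v >> 4) )) bytes"
    SQ entry: min 64 max 64 bytes
    $ v=0x44; echo "CQ entry: min $(( 1 << (v & 0xf) )) max $(( 1 << (v >> 4) )) bytes"
    CQ entry: min 16 max 16 bytes
)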
00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.310 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
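(Side note, added for illustration only: the trace above is the nvme_get helper in nvme/functions.sh walking the output of /usr/local/src/nvme-cli/nvme id-ctrl line by line; with IFS=: each line splits at the first ':' into a register name and a value, empty values are skipped via the [[ -n ... ]] check, and the rest are eval'd into the nvme0 associative array. A minimal standalone sketch of that idea, assuming only that nvme-cli is installed; the parse_id_ctrl name and the plain ctrl array are illustrative, not the script's actual helpers:

    # Sketch only - mirrors the IFS=: / read -r reg val / eval pattern seen in the trace
    parse_id_ctrl() {
        local dev=$1 reg val
        declare -gA ctrl=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}   # field name with whitespace stripped
            val=${val# }               # value with one leading space dropped
            [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
        done < <(nvme id-ctrl "$dev")
    }

    # e.g.: parse_id_ctrl /dev/nvme0; echo "sn=${ctrl[sn]} oacs=${ctrl[oacs]}"
)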
00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.311 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
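(Side note, added for illustration only: the namespace fields captured so far already give the device size. nsze=0x140000 blocks, and flbas=0x4 selects LBA format 4, which the lbaf table read just below reports as lbads:12, i.e. 2^12 = 4096-byte blocks; the test itself never computes this:

    $ echo $(( 0x140000 * 4096 ))
    5368709120    # = 5 GiB for the QEMU-backed nvme0n1
)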
00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.312 14:15:44 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:25.312 14:15:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.313 14:15:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:25.313 14:15:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:25.313 14:15:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:25.313 14:15:44 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:25.313 14:15:44 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:25.313 14:15:44 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:25.313 14:15:44 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:25.313 14:15:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:25.314 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.314 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 
14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:25.315 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:25.315 14:15:44 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.315 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:25.316 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:25.316 14:15:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 
14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:25.317 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
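(The xtrace entries around this point are the test's nvme_get helper walking the output of `nvme id-ns /dev/nvme1n1`: functions.sh@21-23 split each `reg : val` line on the colon via IFS=: and store it into a global associative array such as nvme1n1[] with eval. A minimal standalone sketch of that same pattern follows; the array name ctrl_info and the trimming details are illustrative assumptions, not copied from nvme/functions.sh.)

  #!/usr/bin/env bash
  # Sketch only: capture nvme-cli "field : value" output into an associative array,
  # mirroring the reg/val loop visible in the trace above.
  declare -A ctrl_info
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}     # strip the padding around the field name
      val=${val# }                 # drop the single space printed after the colon
      [[ -n $reg && -n $val ]] && ctrl_info[$reg]=$val
  done < <(nvme id-ns /dev/nvme1n1)
  printf 'nsze=%s flbas=%s\n' "${ctrl_info[nsze]}" "${ctrl_info[flbas]}"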
00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:25.318 14:15:44 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:25.318 14:15:44 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:25.318 14:15:44 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:25.318 14:15:44 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.318 14:15:44 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.318 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
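(A few entries earlier the outer loop finished nvme1, recorded its PCI address 0000:00:10.0 in bdfs[], and moved on to /sys/class/nvme/nvme2 at 0000:00:12.0. The controller-to-BDF mapping can be recovered from sysfs directly; a small sketch independent of the test scripts, assuming PCIe-attached controllers whose device symlink resolves to the PCI directory.)

  # Sketch: list NVMe controllers and the PCI BDF each one sits on.
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      bdf=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
      echo "${ctrl##*/} -> $bdf"
  done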
00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.319 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
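(The raw identify values captured here repeat for every QEMU controller in this run; interpreted per the NVMe spec, which the log itself does not decode: sqes=0x66 packs the minimum and maximum submission-queue entry sizes as powers of two in its low/high nibbles, both 2^6 = 64 bytes; cqes=0x44 does the same for completion-queue entries, 16 bytes; and an LBA format's lbads is the log2 of the block size, so lbads:9 is 512 B and lbads:12 is 4096 B. The same arithmetic in bash:)

  sqes=0x66 cqes=0x44
  echo "SQE: min $((1 << (sqes & 0xf))) B, max $((1 << ((sqes >> 4) & 0xf))) B"
  echo "CQE: min $((1 << (cqes & 0xf))) B, max $((1 << ((cqes >> 4) & 0xf))) B"
  echo "lbads:12 -> $((1 << 12))-byte blocks"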
00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.320 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:25.321 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 
14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:25.321 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:25.322 14:15:44 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:25.322 14:15:44 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:25.322 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.323 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:25.324 14:15:45 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.324 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:25.325 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:25.587 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:25.588 14:15:45 nvme_scc -- scripts/common.sh@15 -- # local i 00:10:25.588 14:15:45 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:25.588 14:15:45 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:25.588 14:15:45 nvme_scc -- scripts/common.sh@24 -- # return 0 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:25.588 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:25.589 14:15:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:25.589 14:15:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:25.589 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:25.590 14:15:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:25.590 14:15:45 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:25.590 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:25.591 
14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
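The lines above are the tail of the same id-ctrl pass for nvme3: functions.sh pipes '/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3' through an 'IFS=: read -r reg val' loop and evals every non-empty pair into the global associative array nvme3, then takes a nameref to nvme3_ns for the namespace scan. A condensed sketch of that mechanism, assuming field:value output from nvme-cli; whitespace handling is simplified relative to the traced script:

    # Condensed sketch of the nvme_get parse traced above (simplified).
    nvme_get_sketch() {
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # field name, e.g. "oncs"
            val=${val# }                     # drop one leading space
            [[ -n $val ]] || continue        # skip empty values
            eval "${ref}[${reg}]=\"${val}\"" # nvme3[oncs]="0x15d"
        done < <(nvme id-ctrl "$dev")
    }
    # illustrative usage: nvme_get_sketch nvme3 /dev/nvme3; echo "${nvme3[oncs]}"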
00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:25.591 14:15:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:10:25.591 14:15:45 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:25.591 14:15:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:10:25.592 14:15:45 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:10:25.592 14:15:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:25.592 14:15:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:25.592 14:15:45 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.160 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.728 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.728 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.728 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.728 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.728 14:15:46 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:26.728 14:15:46 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:26.728 14:15:46 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.728 14:15:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:26.728 ************************************ 00:10:26.728 START TEST nvme_simple_copy 00:10:26.728 ************************************ 00:10:26.728 14:15:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:26.987 Initializing NVMe Controllers 00:10:26.987 Attaching to 0000:00:10.0 00:10:26.987 Controller supports SCC. Attached to 0000:00:10.0 00:10:26.987 Namespace ID: 1 size: 6GB 00:10:26.987 Initialization complete. 00:10:26.987 00:10:26.987 Controller QEMU NVMe Ctrl (12340 ) 00:10:26.987 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:26.987 Namespace Block Size:4096 00:10:26.987 Writing LBAs 0 to 63 with Random Data 00:10:26.987 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:26.987 LBAs matching Written Data: 64 00:10:26.987 00:10:26.987 real 0m0.314s 00:10:26.987 user 0m0.118s 00:10:26.987 sys 0m0.094s 00:10:26.987 14:15:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.987 ************************************ 00:10:26.987 14:15:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:26.987 END TEST nvme_simple_copy 00:10:26.987 ************************************ 00:10:27.247 00:10:27.247 real 0m8.066s 00:10:27.247 user 0m1.343s 00:10:27.247 sys 0m1.693s 00:10:27.247 14:15:46 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:27.247 14:15:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:27.247 ************************************ 00:10:27.247 END TEST nvme_scc 00:10:27.247 ************************************ 00:10:27.247 14:15:46 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:10:27.247 14:15:46 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:10:27.247 14:15:46 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:10:27.247 14:15:46 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]] 00:10:27.247 14:15:46 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:27.247 14:15:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:27.247 14:15:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:27.247 14:15:46 -- common/autotest_common.sh@10 -- # set +x 00:10:27.247 ************************************ 00:10:27.247 START TEST nvme_fdp 00:10:27.247 ************************************ 00:10:27.247 14:15:46 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:27.247 * Looking for test storage... 
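Before the nvme_fdp trace continues, note how the nvme_scc run above selected its target: get_ctrls_with_feature walks every parsed controller, reads oncs (0x15d for all four here), and keeps those with ONCS bit 8 set, the NVMe Copy command capability exercised by the simple_copy test on nvme1 at 0000:00:10.0. A minimal sketch of that bit test, assuming the oncs value is stored as captured above (the function name is illustrative):

    # ONCS bit 8 advertises the Copy command, so 0x15d & (1 << 8) is non-zero.
    ctrl_has_scc_sketch() {
        local -n _c=$1                 # e.g. nvme1, filled by the id-ctrl parse
        (( ${_c[oncs]:-0} & 1 << 8 ))  # true when Simple Copy is supported
    }
    # illustrative usage: ctrl_has_scc_sketch nvme1 && echo "nvme1 supports copy"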
00:10:27.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:27.247 14:15:46 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.247 14:15:46 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.247 14:15:46 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.247 14:15:46 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.247 14:15:46 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.247 14:15:46 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.247 14:15:46 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.247 14:15:46 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:27.247 14:15:46 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:27.247 14:15:46 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:27.247 14:15:46 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.247 14:15:46 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:27.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.766 Waiting for block devices as requested 00:10:27.766 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.025 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.025 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.025 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:33.351 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:33.351 14:15:52 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:33.351 14:15:52 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:33.351 14:15:52 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:33.351 14:15:52 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:33.351 14:15:52 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:33.351 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 
14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:33.352 14:15:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.352 14:15:52 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:33.352 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:33.353 14:15:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:33.353 14:15:52 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:33.353 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.354 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.355 
14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:33.355 
14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.355 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:33.356 14:15:52 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:33.356 14:15:52 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:33.356 14:15:52 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:33.356 14:15:52 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:33.356 14:15:52 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:33.356 14:15:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:33.356 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 
14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
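The trace above and below is the per-field capture loop in nvme/functions.sh: each "field : value" line emitted by nvme-cli is split with IFS=: and read -r, and non-empty values are stored into a bash associative array named after the controller (nvme1 here, nvme0 and nvme0n1 earlier). The following is a minimal, self-contained sketch of that pattern only; the helper name capture_id_ctrl, the whitespace trimming, and the canned here-doc input are illustrative assumptions, not the actual functions.sh implementation.

#!/usr/bin/env bash
# Minimal sketch (with assumptions) of the "IFS=: read -r reg val" capture
# pattern traced above.  capture_id_ctrl and the here-doc sample stand in
# for nvme/functions.sh@16-23 and real `nvme id-ctrl` output.

capture_id_ctrl() {
    local -n _ctrl=$1            # caller-supplied associative array (bash 4.3+)
    local reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # assumed: strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}     # assumed: strip leading blanks from the value
        [[ -n $reg && -n $val ]] && _ctrl[$reg]=$val
    done
}

declare -A demo_ctrl=()
capture_id_ctrl demo_ctrl <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
mn        : QEMU NVMe Ctrl
fr        : 8.0.0
oacs      : 0x12a
EOF

printf '%s=%s\n' vid "${demo_ctrl[vid]}" mn "${demo_ctrl[mn]}" oacs "${demo_ctrl[oacs]}"

In the real run the input comes from /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1, and the same loop is reused for id-ns output, which is what populated the nvme0n1 array earlier in the trace.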
00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:33.357 14:15:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:33.357 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
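For context on how these per-controller arrays are tied together: earlier in the trace (nvme/functions.sh@47-63) each /sys/class/nvme/nvmeX entry is checked with pci_can_use, its namespace nodes are scanned, and the results are recorded in the ctrls, nvmes, bdfs and ordered_ctrls maps. The sketch below reconstructs only that bookkeeping; the array names are taken from the trace, while the readlink-based PCI lookup and the omission of pci_can_use and of the id-ctrl/id-ns parsing are assumptions made to keep it self-contained, so it is not the actual script.

#!/usr/bin/env bash
# Illustrative reconstruction (not the real nvme/functions.sh) of the
# enumeration bookkeeping traced at nvme/functions.sh@47-63.

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

scan_nvme_ctrls() {
    local ctrl ctrl_dev ns pci
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup, e.g. 0000:00:10.0

        declare -gA "${ctrl_dev}_ns=()"                   # namespace index -> device name
        unset -n _ctrl_ns
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl_dev}n"*; do
            [[ -e $ns ]] || continue
            _ctrl_ns[${ns##*n}]=${ns##*/}                 # e.g. 1 -> nvme1n1
        done

        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of the per-controller ns array
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
}

scan_nvme_ctrls
declare -p ctrls nvmes bdfs ordered_ctrls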
00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:33.358 14:15:52 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:33.358 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:52 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:33.359 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 
14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.360 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:33.361 14:15:53 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:33.361 14:15:53 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:33.361 14:15:53 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:33.361 14:15:53 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.361 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:33.362 14:15:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.362 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:33.363 14:15:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.363 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:33.627 14:15:53 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 
14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:33.627 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:33.628 14:15:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.628 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:33.629 14:15:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
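[editor's note] For anyone skimming this stretch of the trace: the repeating IFS=: / read -r reg val / [[ -n ... ]] / eval lines are nvme/functions.sh capturing every "field : value" line printed by nvme-cli's id-ctrl / id-ns into a global bash associative array (nvme2, nvme2n1, nvme2n2, ...). Below is a minimal sketch of that pattern, reconstructed from the trace rather than copied from the upstream function; the helper name nvme_get_sketch and the whitespace trimming are illustrative assumptions.

#!/usr/bin/env bash
# Sketch: parse "field : value" output from an nvme-cli command into a global
# associative array named by the caller, the way the trace above ends up with
# nvme2n2[nsze]=0x100000, nvme2n2[flbas]=0x4, and so on.
nvme_get_sketch() {
    local ref=$1 reg val              # ref is the target array name, e.g. nvme2n2
    shift                             # remaining args: the command to run
    declare -gA "$ref"                # make the array visible to the caller
    eval "$ref=()"                    # start from an empty array

    while IFS=: read -r reg val; do
        # trim the padding nvme-cli puts around the field name and value
        reg="${reg#"${reg%%[![:space:]]*}"}"; reg="${reg%"${reg##*[![:space:]]}"}"
        val="${val#"${val%%[![:space:]]*}"}"; val="${val%"${val##*[![:space:]]}"}"
        [[ -n $val ]] || continue                # skip blank/headers-only lines
        eval "${ref}[$reg]=\"\$val\""            # e.g. nvme2n2[nsze]="0x100000"
    done < <("$@")                               # e.g. nvme id-ns /dev/nvme2n2
}

# Example use (assumes nvme-cli and the device are present):
#   nvme_get_sketch nvme2n2 nvme id-ns /dev/nvme2n2
#   echo "${nvme2n2[nsze]}"    # -> 0x100000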
00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:33.629 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
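[editor's note] The functions.sh@53-58 steps interleaved through this trace are the per-namespace walk: every nvme2n* node under the controller's sysfs directory gets an id-ns dump parsed the same way, and the namespace is indexed by its number in nvme2_ns. A sketch of that loop, reusing the hypothetical helper above and assuming the sysfs paths shown in the log:

# Sketch of the namespace enumeration seen at functions.sh@53-58 (not verbatim).
ctrl=/sys/class/nvme/nvme2
declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns                   # nameref, as in "local -n _ctrl_ns=nvme2_ns"

for ns in "$ctrl/${ctrl##*/}n"*; do            # /sys/class/nvme/nvme2/nvme2n1, ...n2, ...n3
    [[ -e $ns ]] || continue                   # functions.sh@55
    ns_dev=${ns##*/}                           # nvme2n1
    nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"   # fills the nvme2n1 array
    _ctrl_ns[${ns##*n}]=$ns_dev                # nvme2_ns[1]=nvme2n1, [2]=nvme2n2, ...
done

Reading the values captured here: each namespace reports flbas=0x4 and lbaf4 as "ms:0 lbads:12 rp:0 (in use)", i.e. the active LBA format is 2^12 = 4096-byte data blocks with no separate metadata, so nsze/ncap/nuse of 0x100000 blocks works out to 4 GiB per namespace.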
00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.630 14:15:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.630 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:33.631 14:15:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:33.631 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:33.632 14:15:53 nvme_fdp -- scripts/common.sh@15 -- # local i 00:10:33.632 14:15:53 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:33.632 14:15:53 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:33.632 14:15:53 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:33.632 14:15:53 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:33.632 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:33.633 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:33.634 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 
14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:33.635 14:15:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:10:33.635 14:15:53 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:10:33.636 14:15:53 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:10:33.636 14:15:53 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:10:33.636 14:15:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:33.636 14:15:53 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:33.636 14:15:53 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:34.205 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.774 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.774 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.774 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.774 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:35.033 14:15:54 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:35.033 14:15:54 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:35.033 14:15:54 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.033 14:15:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:35.033 ************************************ 00:10:35.033 START TEST nvme_flexible_data_placement 00:10:35.033 ************************************ 00:10:35.033 14:15:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:35.292 Initializing NVMe Controllers 00:10:35.292 Attaching to 0000:00:13.0 00:10:35.292 Controller supports FDP Attached to 0000:00:13.0 00:10:35.292 Namespace ID: 1 Endurance Group ID: 1 
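The ctrl_has_fdp checks traced above boil down to one test: a controller qualifies only if bit 19 (Flexible Data Placement) of its Identify Controller CTRATT field is set, which is why the 0x88010 value on nvme3 at 0000:00:13.0 passes while the 0x8000 controllers do not. A minimal standalone sketch of that check follows; the helper name is made up for illustration, and the parsing assumes the plain-text output that nvme id-ctrl prints by default rather than the functions.sh plumbing used in this run.

# Sketch only: report whether a controller advertises FDP (CTRATT bit 19).
# "ctrl_has_fdp_sketch" is a hypothetical name, not the SPDK helper.
ctrl_has_fdp_sketch() {
    local dev=$1 ctratt
    # nvme id-ctrl prints a line like "ctratt    : 0x88010"
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    (( ctratt & 1 << 19 ))   # 0x88010 has bit 19 (0x80000) set; 0x8000 does not
}

ctrl_has_fdp_sketch /dev/nvme3 && echo "/dev/nvme3 supports FDP"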
00:10:35.292 Initialization complete. 00:10:35.292 00:10:35.292 ================================== 00:10:35.292 == FDP tests for Namespace: #01 == 00:10:35.292 ================================== 00:10:35.292 00:10:35.292 Get Feature: FDP: 00:10:35.292 ================= 00:10:35.292 Enabled: Yes 00:10:35.292 FDP configuration Index: 0 00:10:35.292 00:10:35.292 FDP configurations log page 00:10:35.292 =========================== 00:10:35.292 Number of FDP configurations: 1 00:10:35.292 Version: 0 00:10:35.293 Size: 112 00:10:35.293 FDP Configuration Descriptor: 0 00:10:35.293 Descriptor Size: 96 00:10:35.293 Reclaim Group Identifier format: 2 00:10:35.293 FDP Volatile Write Cache: Not Present 00:10:35.293 FDP Configuration: Valid 00:10:35.293 Vendor Specific Size: 0 00:10:35.293 Number of Reclaim Groups: 2 00:10:35.293 Number of Reclaim Unit Handles: 8 00:10:35.293 Max Placement Identifiers: 128 00:10:35.293 Number of Namespaces Supported: 256 00:10:35.293 Reclaim unit Nominal Size: 6000000 bytes 00:10:35.293 Estimated Reclaim Unit Time Limit: Not Reported 00:10:35.293 RUH Desc #000: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #001: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #002: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #003: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #004: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #005: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #006: RUH Type: Initially Isolated 00:10:35.293 RUH Desc #007: RUH Type: Initially Isolated 00:10:35.293 00:10:35.293 FDP reclaim unit handle usage log page 00:10:35.293 ====================================== 00:10:35.293 Number of Reclaim Unit Handles: 8 00:10:35.293 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:35.293 RUH Usage Desc #001: RUH Attributes: Unused 00:10:35.293 RUH Usage Desc #002: RUH Attributes: Unused 00:10:35.293 RUH Usage Desc #003: RUH Attributes: Unused 00:10:35.293 RUH Usage Desc #004: RUH Attributes: Unused 00:10:35.293 RUH Usage Desc #005: RUH Attributes: Unused 00:10:35.293 RUH Usage Desc #006: RUH Attributes: Unused 00:10:35.293 RUH Usage Desc #007: RUH Attributes: Unused 00:10:35.293 00:10:35.293 FDP statistics log page 00:10:35.293 ======================= 00:10:35.293 Host bytes with metadata written: 842330112 00:10:35.293 Media bytes with metadata written: 842424320 00:10:35.293 Media bytes erased: 0 00:10:35.293 00:10:35.293 FDP Reclaim unit handle status 00:10:35.293 ============================== 00:10:35.293 Number of RUHS descriptors: 2 00:10:35.293 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003cb1 00:10:35.293 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:35.293 00:10:35.293 FDP write on placement id: 0 success 00:10:35.293 00:10:35.293 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:35.293 00:10:35.293 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:35.293 00:10:35.293 Get Feature: FDP Events for Placement handle: #0 00:10:35.293 ======================== 00:10:35.293 Number of FDP Events: 6 00:10:35.293 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:35.293 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:35.293 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:35.293 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:35.293 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:35.293 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
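The dump above is produced by the test/nvme/fdp/fdp binary, but the same FDP information can be pulled ad hoc with stock nvme-cli. A rough sketch is below; the log page identifiers (0x20 configurations, 0x21 reclaim unit handle usage, 0x22 statistics, 0x23 events) and the use of --lsi to carry the endurance group ID reported above are assumptions taken from the NVMe FDP spec, not from anything this run exercised.

# Sketch only: raw dumps of the FDP log pages for endurance group 1.
# Log page IDs and --lsi usage are spec-based assumptions, see the note above.
dev=/dev/nvme3
for lid in 0x20 0x21 0x22 0x23; do
    echo "=== FDP log page $lid ==="
    nvme get-log "$dev" --log-id="$lid" --log-len=512 --lsi=1
done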
00:10:35.293 00:10:35.293 FDP events log page 00:10:35.293 =================== 00:10:35.293 Number of FDP events: 1 00:10:35.293 FDP Event #0: 00:10:35.293 Event Type: RU Not Written to Capacity 00:10:35.293 Placement Identifier: Valid 00:10:35.293 NSID: Valid 00:10:35.293 Location: Valid 00:10:35.293 Placement Identifier: 0 00:10:35.293 Event Timestamp: 8 00:10:35.293 Namespace Identifier: 1 00:10:35.293 Reclaim Group Identifier: 0 00:10:35.293 Reclaim Unit Handle Identifier: 0 00:10:35.293 00:10:35.293 FDP test passed 00:10:35.293 00:10:35.293 real 0m0.292s 00:10:35.293 user 0m0.105s 00:10:35.293 sys 0m0.085s 00:10:35.293 14:15:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.293 ************************************ 00:10:35.293 END TEST nvme_flexible_data_placement 00:10:35.293 ************************************ 00:10:35.293 14:15:54 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:35.293 00:10:35.293 real 0m8.137s 00:10:35.293 user 0m1.414s 00:10:35.293 sys 0m1.662s 00:10:35.293 14:15:54 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.293 14:15:54 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:35.293 ************************************ 00:10:35.293 END TEST nvme_fdp 00:10:35.293 ************************************ 00:10:35.293 14:15:54 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:10:35.293 14:15:54 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:35.293 14:15:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:35.293 14:15:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.293 14:15:54 -- common/autotest_common.sh@10 -- # set +x 00:10:35.293 ************************************ 00:10:35.293 START TEST nvme_rpc 00:10:35.293 ************************************ 00:10:35.293 14:15:54 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:35.553 * Looking for test storage... 
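Each START TEST / END TEST banner pair with the real/user/sys line between them is emitted by the run_test helper from common/autotest_common.sh, which essentially times the test command and frames its output. A minimal sketch of that shape follows, assuming nothing about SPDK's actual implementation beyond what the banners show; the function name is made up for illustration.

# Sketch only: frame and time a test command the way the banners above suggest.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

run_test_sketch nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh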
00:10:35.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:10:35.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=70750 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:35.553 14:15:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 70750 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 70750 ']' 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.553 14:15:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.553 [2024-07-26 14:15:55.260795] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
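The get_first_nvme_bdf trace above reduces to: run scripts/gen_nvme.sh to emit the NVMe bdev config JSON, extract every traddr with jq, and take the first entry, which is why 0000:00:10.0 serves the RPC test while 0000:00:13.0 handled the FDP test. Condensed into a standalone snippet using the same paths as this run:

# Sketch only: pick the first NVMe PCI address the same way the trace above does.
rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
bdf=${bdfs[0]}   # in this run: 0000:00:10.0, out of 10.0/11.0/12.0/13.0
echo "$bdf"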
00:10:35.553 [2024-07-26 14:15:55.260977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70750 ] 00:10:35.812 [2024-07-26 14:15:55.431951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.071 [2024-07-26 14:15:55.658862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.071 [2024-07-26 14:15:55.658866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.640 14:15:56 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.640 14:15:56 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:36.640 14:15:56 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:36.899 Nvme0n1 00:10:36.899 14:15:56 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:36.899 14:15:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:37.158 request: 00:10:37.158 { 00:10:37.158 "bdev_name": "Nvme0n1", 00:10:37.158 "filename": "non_existing_file", 00:10:37.158 "method": "bdev_nvme_apply_firmware", 00:10:37.158 "req_id": 1 00:10:37.158 } 00:10:37.158 Got JSON-RPC error response 00:10:37.158 response: 00:10:37.158 { 00:10:37.158 "code": -32603, 00:10:37.158 "message": "open file failed." 00:10:37.158 } 00:10:37.158 14:15:56 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:37.158 14:15:56 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:37.158 14:15:56 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:37.417 14:15:57 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:37.417 14:15:57 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 70750 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 70750 ']' 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 70750 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70750 00:10:37.417 killing process with pid 70750 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70750' 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@969 -- # kill 70750 00:10:37.417 14:15:57 nvme_rpc -- common/autotest_common.sh@974 -- # wait 70750 00:10:39.329 00:10:39.329 real 0m3.941s 00:10:39.329 user 0m7.478s 00:10:39.329 sys 0m0.552s 00:10:39.329 14:15:58 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.329 14:15:58 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.329 ************************************ 00:10:39.329 END TEST nvme_rpc 00:10:39.329 ************************************ 00:10:39.329 14:15:58 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:39.329 14:15:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:39.329 14:15:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.329 14:15:58 -- common/autotest_common.sh@10 -- # set +x 00:10:39.329 ************************************ 00:10:39.329 START TEST nvme_rpc_timeouts 00:10:39.329 ************************************ 00:10:39.329 14:15:58 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:39.329 * Looking for test storage... 00:10:39.329 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_70821 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_70821 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=70845 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:39.329 14:15:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 70845 00:10:39.329 14:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 70845 ']' 00:10:39.329 14:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.329 14:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:39.329 14:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.329 14:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:39.329 14:15:59 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:39.592 [2024-07-26 14:15:59.191777] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
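[Sketch] The timeouts test traced below amounts to two config snapshots around a single bdev_nvme_set_options call. A rough sketch; the redirect targets are inferred from the tmpfile names in the trace, with $$ standing in for the target pid used there:

    scripts/rpc.py save_config > /tmp/settings_default_$$       # defaults: action_on_timeout=none, both timeouts 0
    scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    scripts/rpc.py save_config > /tmp/settings_modified_$$      # snapshot after the change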
00:10:39.592 [2024-07-26 14:15:59.191995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:10:39.850 [2024-07-26 14:15:59.366400] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:39.850 [2024-07-26 14:15:59.525649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.850 [2024-07-26 14:15:59.525657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.440 14:16:00 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.440 14:16:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:40.440 Checking default timeout settings: 00:10:40.440 14:16:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:40.440 14:16:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:41.007 Making settings changes with rpc: 00:10:41.007 14:16:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:41.007 14:16:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:41.265 Check default vs. modified settings: 00:10:41.265 14:16:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:41.265 14:16:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_70821 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_70821 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:41.523 Setting action_on_timeout is changed as expected. 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
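[Sketch] Each of the three settings is then diffed between the two dumps with the grep/awk/sed pipeline seen in the trace. Roughly, with $default and $modified standing in for the two tmpfiles:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" "$default"  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" == "$after" ]; then
            exit 1                                              # value did not change: fail the test
        fi
        echo "Setting $setting is changed as expected."
    done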
00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_70821 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_70821 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:41.523 Setting timeout_us is changed as expected. 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:41.523 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_70821 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_70821 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:41.524 Setting timeout_admin_us is changed as expected. 
00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_70821 /tmp/settings_modified_70821 00:10:41.524 14:16:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 70845 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 70845 ']' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 70845 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70845 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70845' 00:10:41.524 killing process with pid 70845 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 70845 00:10:41.524 14:16:01 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 70845 00:10:43.426 RPC TIMEOUT SETTING TEST PASSED. 00:10:43.426 14:16:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:43.426 00:10:43.426 real 0m4.065s 00:10:43.426 user 0m7.843s 00:10:43.426 sys 0m0.571s 00:10:43.426 14:16:03 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.426 14:16:03 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:43.426 ************************************ 00:10:43.426 END TEST nvme_rpc_timeouts 00:10:43.426 ************************************ 00:10:43.426 14:16:03 -- spdk/autotest.sh@247 -- # uname -s 00:10:43.426 14:16:03 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:10:43.426 14:16:03 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:43.426 14:16:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:43.426 14:16:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.426 14:16:03 -- common/autotest_common.sh@10 -- # set +x 00:10:43.426 ************************************ 00:10:43.426 START TEST sw_hotplug 00:10:43.426 ************************************ 00:10:43.426 14:16:03 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:43.426 * Looking for test storage... 
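[Sketch] Before hot-plugging anything, sw_hotplug.sh enumerates NVMe controllers visible to userspace; the scan traced below is essentially one lspci pipeline keyed on PCI class 01, subclass 08, prog-if 02:

    # class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe)
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # each BDF is then run through pci_can_use (honouring PCI_ALLOWED) and checked for
    # /sys/bus/pci/drivers/nvme/<bdf> before being appended to the nvmes array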
00:10:43.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:43.684 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:43.943 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:43.943 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.943 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.943 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:43.943 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:44.201 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:44.201 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:44.201 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:44.201 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@230 -- # local class 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:10:44.202 14:16:03 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@15 -- # local i 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:10:44.202 14:16:03 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:44.202 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:44.202 14:16:03 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:44.202 14:16:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:44.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:44.720 Waiting for block devices as requested 00:10:44.720 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.720 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.979 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:44.979 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:50.284 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:50.284 14:16:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:50.284 14:16:09 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.542 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:50.542 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.542 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:50.811 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:51.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.070 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:51.329 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:51.329 14:16:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.329 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=71701 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:51.330 14:16:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:51.330 14:16:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:51.330 14:16:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:51.330 14:16:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:51.330 14:16:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:51.330 14:16:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:51.588 Initializing NVMe Controllers 00:10:51.588 Attaching to 0000:00:10.0 00:10:51.588 Attaching to 0000:00:11.0 00:10:51.588 Attached to 0000:00:10.0 00:10:51.588 Attached to 0000:00:11.0 00:10:51.588 Initialization complete. Starting I/O... 
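[Sketch] The first hotplug pass drives SPDK's hotplug example directly, with no target process. A rough sketch of what run_hotplug and remove_attach_helper do; the per-device sysfs remove path is an assumption, since the xtrace below only shows the bare "echo 1" writes:

    PCI_ALLOWED='0000:00:10.0 0000:00:11.0' ./scripts/setup.sh        # leave only two controllers in userspace
    ./build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning &         # -n/-r appear sized to 3 events x 2 drives
    hotplug_pid=$!
    # remove_attach_helper 3 6 false then loops three times:
    #   - surprise-remove both controllers (write 1 to each device's sysfs remove node; path not shown in the trace)
    #   - echo 1 > /sys/bus/pci/rescan and redo the uio_pci_generic driver writes so the devices reappear
    #   - sleep 6 (hotplug_wait) so the example can observe the remove/attach pair
    # the example exits on its own afterwards; the script confirms with kill -0 and wait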
00:10:51.589 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:51.589 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:51.589 00:10:52.526 QEMU NVMe Ctrl (12340 ): 1136 I/Os completed (+1136) 00:10:52.526 QEMU NVMe Ctrl (12341 ): 1251 I/Os completed (+1251) 00:10:52.526 00:10:53.463 QEMU NVMe Ctrl (12340 ): 2529 I/Os completed (+1393) 00:10:53.463 QEMU NVMe Ctrl (12341 ): 2750 I/Os completed (+1499) 00:10:53.463 00:10:54.837 QEMU NVMe Ctrl (12340 ): 4353 I/Os completed (+1824) 00:10:54.837 QEMU NVMe Ctrl (12341 ): 4641 I/Os completed (+1891) 00:10:54.837 00:10:55.773 QEMU NVMe Ctrl (12340 ): 6061 I/Os completed (+1708) 00:10:55.773 QEMU NVMe Ctrl (12341 ): 6472 I/Os completed (+1831) 00:10:55.773 00:10:56.709 QEMU NVMe Ctrl (12340 ): 7837 I/Os completed (+1776) 00:10:56.709 QEMU NVMe Ctrl (12341 ): 8356 I/Os completed (+1884) 00:10:56.709 00:10:57.306 14:16:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:57.306 14:16:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.306 14:16:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.306 [2024-07-26 14:16:16.964972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:57.306 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:57.306 [2024-07-26 14:16:16.966851] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.966954] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.966987] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.967013] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:57.306 [2024-07-26 14:16:16.969880] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.969945] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.969969] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.969991] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 14:16:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:57.306 14:16:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:57.306 [2024-07-26 14:16:16.990554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:57.306 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:57.306 [2024-07-26 14:16:16.992356] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.992413] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.992445] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.992468] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:57.306 [2024-07-26 14:16:16.994975] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.995024] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.995050] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 [2024-07-26 14:16:16.995070] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:57.306 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:57.307 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:57.307 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:57.307 EAL: Scan for (pci) bus failed. 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:57.565 Attaching to 0000:00:10.0 00:10:57.565 Attached to 0000:00:10.0 00:10:57.565 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:57.565 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:57.565 14:16:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:57.565 Attaching to 0000:00:11.0 00:10:57.565 Attached to 0000:00:11.0 00:10:58.500 QEMU NVMe Ctrl (12340 ): 1890 I/Os completed (+1890) 00:10:58.501 QEMU NVMe Ctrl (12341 ): 1715 I/Os completed (+1715) 00:10:58.501 00:10:59.435 QEMU NVMe Ctrl (12340 ): 3585 I/Os completed (+1695) 00:10:59.435 QEMU NVMe Ctrl (12341 ): 3520 I/Os completed (+1805) 00:10:59.435 00:11:00.809 QEMU NVMe Ctrl (12340 ): 5521 I/Os completed (+1936) 00:11:00.809 QEMU NVMe Ctrl (12341 ): 5512 I/Os completed (+1992) 00:11:00.809 00:11:01.745 QEMU NVMe Ctrl (12340 ): 7413 I/Os completed (+1892) 00:11:01.745 QEMU NVMe Ctrl (12341 ): 7452 I/Os completed (+1940) 00:11:01.745 00:11:02.681 QEMU NVMe Ctrl (12340 ): 9333 I/Os completed (+1920) 00:11:02.681 QEMU NVMe Ctrl (12341 ): 9413 I/Os completed (+1961) 00:11:02.681 00:11:03.615 QEMU NVMe Ctrl (12340 ): 11105 I/Os completed (+1772) 00:11:03.615 QEMU NVMe Ctrl (12341 ): 11305 I/Os completed (+1892) 00:11:03.615 00:11:04.550 QEMU NVMe Ctrl (12340 ): 12953 I/Os completed (+1848) 00:11:04.550 
QEMU NVMe Ctrl (12341 ): 13222 I/Os completed (+1917) 00:11:04.550 00:11:05.486 QEMU NVMe Ctrl (12340 ): 14697 I/Os completed (+1744) 00:11:05.486 QEMU NVMe Ctrl (12341 ): 15022 I/Os completed (+1800) 00:11:05.486 00:11:06.447 QEMU NVMe Ctrl (12340 ): 16429 I/Os completed (+1732) 00:11:06.447 QEMU NVMe Ctrl (12341 ): 16867 I/Os completed (+1845) 00:11:06.447 00:11:07.826 QEMU NVMe Ctrl (12340 ): 18325 I/Os completed (+1896) 00:11:07.826 QEMU NVMe Ctrl (12341 ): 18814 I/Os completed (+1947) 00:11:07.826 00:11:08.763 QEMU NVMe Ctrl (12340 ): 20181 I/Os completed (+1856) 00:11:08.763 QEMU NVMe Ctrl (12341 ): 20718 I/Os completed (+1904) 00:11:08.763 00:11:09.700 QEMU NVMe Ctrl (12340 ): 22017 I/Os completed (+1836) 00:11:09.700 QEMU NVMe Ctrl (12341 ): 22599 I/Os completed (+1881) 00:11:09.700 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.700 [2024-07-26 14:16:29.303412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:09.700 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:09.700 [2024-07-26 14:16:29.305446] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.305526] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.305553] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.305578] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:09.700 [2024-07-26 14:16:29.308616] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.308687] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.308710] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.308729] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.700 [2024-07-26 14:16:29.332521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:09.700 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:09.700 [2024-07-26 14:16:29.334586] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.334693] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.334727] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.334750] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:09.700 [2024-07-26 14:16:29.337393] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.337462] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.337487] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 [2024-07-26 14:16:29.337508] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:09.700 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:09.700 EAL: Scan for (pci) bus failed. 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.700 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:09.959 Attaching to 0000:00:10.0 00:11:09.959 Attached to 0000:00:10.0 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.959 14:16:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:09.959 Attaching to 0000:00:11.0 00:11:09.959 Attached to 0000:00:11.0 00:11:10.526 QEMU NVMe Ctrl (12340 ): 1196 I/Os completed (+1196) 00:11:10.526 QEMU NVMe Ctrl (12341 ): 1019 I/Os completed (+1019) 00:11:10.526 00:11:11.460 QEMU NVMe Ctrl (12340 ): 2992 I/Os completed (+1796) 00:11:11.460 QEMU NVMe Ctrl (12341 ): 2856 I/Os completed (+1837) 00:11:11.460 00:11:12.838 QEMU NVMe Ctrl (12340 ): 4892 I/Os completed (+1900) 00:11:12.838 QEMU NVMe Ctrl (12341 ): 4794 I/Os completed (+1938) 00:11:12.838 00:11:13.774 QEMU NVMe Ctrl (12340 ): 6796 I/Os completed (+1904) 00:11:13.774 QEMU NVMe Ctrl (12341 ): 6734 I/Os completed (+1940) 00:11:13.774 00:11:14.715 QEMU NVMe Ctrl (12340 ): 8680 I/Os completed (+1884) 00:11:14.715 QEMU NVMe Ctrl (12341 ): 8706 I/Os completed (+1972) 00:11:14.715 00:11:15.653 QEMU NVMe Ctrl (12340 ): 10536 I/Os completed (+1856) 00:11:15.653 QEMU NVMe Ctrl (12341 ): 10604 I/Os completed (+1898) 00:11:15.653 00:11:16.589 QEMU NVMe Ctrl (12340 ): 12392 I/Os completed (+1856) 00:11:16.589 QEMU NVMe Ctrl (12341 ): 12534 I/Os completed (+1930) 00:11:16.589 
00:11:17.526 QEMU NVMe Ctrl (12340 ): 14316 I/Os completed (+1924) 00:11:17.526 QEMU NVMe Ctrl (12341 ): 14490 I/Os completed (+1956) 00:11:17.526 00:11:18.463 QEMU NVMe Ctrl (12340 ): 16220 I/Os completed (+1904) 00:11:18.463 QEMU NVMe Ctrl (12341 ): 16448 I/Os completed (+1958) 00:11:18.463 00:11:19.842 QEMU NVMe Ctrl (12340 ): 18120 I/Os completed (+1900) 00:11:19.842 QEMU NVMe Ctrl (12341 ): 18418 I/Os completed (+1970) 00:11:19.842 00:11:20.779 QEMU NVMe Ctrl (12340 ): 19869 I/Os completed (+1749) 00:11:20.779 QEMU NVMe Ctrl (12341 ): 20308 I/Os completed (+1890) 00:11:20.779 00:11:21.715 QEMU NVMe Ctrl (12340 ): 21773 I/Os completed (+1904) 00:11:21.715 QEMU NVMe Ctrl (12341 ): 22317 I/Os completed (+2009) 00:11:21.715 00:11:21.973 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:21.973 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.973 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.973 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.973 [2024-07-26 14:16:41.662680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:21.973 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:21.973 [2024-07-26 14:16:41.664857] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 [2024-07-26 14:16:41.664927] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 [2024-07-26 14:16:41.664954] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 [2024-07-26 14:16:41.664999] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:21.973 [2024-07-26 14:16:41.668098] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 [2024-07-26 14:16:41.668158] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 [2024-07-26 14:16:41.668184] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 [2024-07-26 14:16:41.668206] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.973 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.973 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.973 [2024-07-26 14:16:41.690753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:21.974 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:21.974 [2024-07-26 14:16:41.692583] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 [2024-07-26 14:16:41.692652] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 [2024-07-26 14:16:41.692697] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 [2024-07-26 14:16:41.692721] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:21.974 [2024-07-26 14:16:41.695314] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 [2024-07-26 14:16:41.695367] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 [2024-07-26 14:16:41.695410] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 [2024-07-26 14:16:41.695430] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.974 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:21.974 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:22.233 Attaching to 0000:00:10.0 00:11:22.233 Attached to 0000:00:10.0 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.233 14:16:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:22.233 Attaching to 0000:00:11.0 00:11:22.233 Attached to 0000:00:11.0 00:11:22.233 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:22.233 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:22.233 [2024-07-26 14:16:41.974772] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:34.448 14:16:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:34.448 14:16:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.448 14:16:53 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.00 00:11:34.448 14:16:53 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.00 00:11:34.448 14:16:53 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:34.448 14:16:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.00 00:11:34.448 14:16:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:11:34.448 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 14:16:53 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 71701 00:11:41.011 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (71701) - No such process 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 71701 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=72245 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:41.011 14:16:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 72245 00:11:41.011 14:16:59 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 72245 ']' 00:11:41.011 14:16:59 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.011 14:16:59 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.011 14:16:59 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.011 14:16:59 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.011 14:16:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.011 [2024-07-26 14:17:00.095451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:41.011 [2024-07-26 14:17:00.095610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72245 ] 00:11:41.011 [2024-07-26 14:17:00.257940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.012 [2024-07-26 14:17:00.453399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:41.579 14:17:01 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:41.579 14:17:01 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:41.579 14:17:01 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.140 14:17:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.140 14:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.140 14:17:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.140 [2024-07-26 14:17:07.182481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:48.140 [2024-07-26 14:17:07.185253] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.185307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.185346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 [2024-07-26 14:17:07.185374] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.185394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.185409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 [2024-07-26 14:17:07.185428] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.185442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.185458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 [2024-07-26 14:17:07.185473] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.185491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.185505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 14:17:07 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.140 [2024-07-26 14:17:07.582501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:48.140 [2024-07-26 14:17:07.585379] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.585434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.585456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 [2024-07-26 14:17:07.585486] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.585502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.585518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 [2024-07-26 14:17:07.585533] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.585549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.585564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 [2024-07-26 14:17:07.585580] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.140 [2024-07-26 14:17:07.585594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.140 [2024-07-26 14:17:07.585609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.140 14:17:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.140 14:17:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.140 14:17:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.140 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.408 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.408 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.408 
14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.408 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.408 14:17:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:48.408 14:17:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.408 14:17:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.408 14:17:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.621 14:17:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.621 14:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.621 14:17:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.621 14:17:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.621 14:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.621 [2024-07-26 14:17:20.182679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:00.621 [2024-07-26 14:17:20.185620] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.621 [2024-07-26 14:17:20.185675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.621 [2024-07-26 14:17:20.185702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.621 [2024-07-26 14:17:20.185729] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.621 [2024-07-26 14:17:20.185747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.621 [2024-07-26 14:17:20.185761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.621 [2024-07-26 14:17:20.185779] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.621 [2024-07-26 14:17:20.185793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.621 [2024-07-26 14:17:20.185809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.621 [2024-07-26 14:17:20.185823] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.621 [2024-07-26 14:17:20.185839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.621 [2024-07-26 14:17:20.185853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.621 14:17:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.621 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.187 14:17:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.187 14:17:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.187 14:17:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:01.187 14:17:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.187 [2024-07-26 14:17:20.782691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
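[Sketch] In the target-based pass (use_bdev=true), removal is detected through the bdev layer rather than the example app: hotplug monitoring is enabled once and the helper polls bdev_get_bdevs until the removed BDFs disappear. A condensed sketch of the loop traced around this point, where rpc_cmd is the suite's RPC wrapper:

    rpc_cmd bdev_nvme_set_hotplug -e                   # enable hotplug monitoring in the target
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    # after surprise-removing the controllers:
    while :; do
        bdfs=($(bdev_bdfs))
        (( ${#bdfs[@]} > 0 )) || break
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done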
00:12:01.187 [2024-07-26 14:17:20.785433] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.187 [2024-07-26 14:17:20.785492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.187 [2024-07-26 14:17:20.785515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.187 [2024-07-26 14:17:20.785547] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.187 [2024-07-26 14:17:20.785563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.187 [2024-07-26 14:17:20.785579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.187 [2024-07-26 14:17:20.785594] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.187 [2024-07-26 14:17:20.785610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.187 [2024-07-26 14:17:20.785625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.187 [2024-07-26 14:17:20.785642] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:01.187 [2024-07-26 14:17:20.785656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:01.187 [2024-07-26 14:17:20.785672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.753 14:17:21 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.753 14:17:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:01.753 14:17:21 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.753 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:02.012 14:17:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.230 14:17:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.230 14:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.230 14:17:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.230 14:17:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.230 14:17:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.230 [2024-07-26 14:17:33.782841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
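Once both controllers are gone, sw_hotplug.sh@56-66 re-attaches them and sleeps 12 seconds so the hotplug poller can re-enumerate the namespaces. Only the echoed values are visible in the log (1, uio_pci_generic, each BDF twice, an empty string), so the sysfs targets in this sketch are assumptions about where those writes go, not a verbatim copy of the script:

    # Assumed re-attach sequence; only the echoed values appear in the trace,
    # the redirection targets are inferred.
    echo 1 > /sys/bus/pci/rescan                                            # sh@56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # sh@59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # sh@60/61 (assumed target)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # sh@62, clear the override
    done
    sleep 12                                                                # sh@66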
00:12:14.230 [2024-07-26 14:17:33.786108] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.230 [2024-07-26 14:17:33.786285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.230 [2024-07-26 14:17:33.786467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.230 [2024-07-26 14:17:33.786696] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.230 [2024-07-26 14:17:33.786832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.230 [2024-07-26 14:17:33.787126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.230 [2024-07-26 14:17:33.787285] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.230 [2024-07-26 14:17:33.787417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.230 [2024-07-26 14:17:33.787572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.230 [2024-07-26 14:17:33.787744] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.230 [2024-07-26 14:17:33.787874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.230 [2024-07-26 14:17:33.788117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.230 14:17:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:14.230 14:17:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:14.489 [2024-07-26 14:17:34.182867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:14.489 [2024-07-26 14:17:34.185723] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.489 [2024-07-26 14:17:34.185938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.489 [2024-07-26 14:17:34.186108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.489 [2024-07-26 14:17:34.186264] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.489 [2024-07-26 14:17:34.186311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.489 [2024-07-26 14:17:34.186455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.489 [2024-07-26 14:17:34.186519] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.489 [2024-07-26 14:17:34.186626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.489 [2024-07-26 14:17:34.186692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.489 [2024-07-26 14:17:34.186814] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:14.489 [2024-07-26 14:17:34.186998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:14.489 [2024-07-26 14:17:34.187237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:14.747 14:17:34 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.747 14:17:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:14.747 14:17:34 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.747 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.005 14:17:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.64 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.64 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.64 00:12:27.210 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.64 2 00:12:27.210 remove_attach_helper took 45.64s to complete (handling 2 nvme drive(s)) 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.210 14:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:27.211 14:17:46 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:27.211 14:17:46 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:27.211 14:17:46 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:33.771 14:17:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.771 14:17:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.771 [2024-07-26 14:17:52.854309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:33.771 14:17:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.771 [2024-07-26 14:17:52.856296] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:52.856356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:52.856380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 [2024-07-26 14:17:52.856405] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:52.856421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:52.856434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 [2024-07-26 14:17:52.856449] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:52.856461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:52.856474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 [2024-07-26 14:17:52.856487] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:52.856500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:52.856512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:33.771 14:17:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:33.771 [2024-07-26 14:17:53.254339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
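The '45.64' seen a little further up is produced by a small timing wrapper in autotest_common.sh (@707-720): it times remove_attach_helper and hands the elapsed seconds back so sw_hotplug.sh@21-22 can print the summary line. The same block also shows the second pass being started with hotplug toggled over RPC (bdev_nvme_set_hotplug -d, then -e) before debug_remove_attach_helper runs again. A hedged sketch of the wrapper, reconstructed from the xtrace output rather than the source; note the sketch discards the helper's own stdout, whereas the real wrapper juggles file descriptors (the exec at @709) to keep it visible:

    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        # TIMEFORMAT=%2R makes bash's `time` print only the elapsed seconds,
        # which the substitution captures from stderr.
        time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    debug_remove_attach_helper() {
        local helper_time=0
        helper_time=$(timing_cmd remove_attach_helper "$@")    # e.g. 45.64
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
            "$helper_time" "${#nvmes[@]}"                      # 2 drives in this run
    }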
00:12:33.771 [2024-07-26 14:17:53.256733] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:53.256798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:53.256819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 [2024-07-26 14:17:53.256844] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:53.256857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:53.256870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 [2024-07-26 14:17:53.256883] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:53.256896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:53.256908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 [2024-07-26 14:17:53.256958] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:33.771 [2024-07-26 14:17:53.256971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:33.771 [2024-07-26 14:17:53.256985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:33.771 14:17:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.771 14:17:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:33.771 14:17:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:33.771 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:34.029 14:17:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.236 14:18:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.236 14:18:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.236 14:18:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.236 14:18:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.236 [2024-07-26 14:18:05.854453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:12:46.236 14:18:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.236 [2024-07-26 14:18:05.856363] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.236 [2024-07-26 14:18:05.856466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.236 [2024-07-26 14:18:05.856652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.236 [2024-07-26 14:18:05.856852] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.236 [2024-07-26 14:18:05.857068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.236 [2024-07-26 14:18:05.857229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.236 [2024-07-26 14:18:05.857403] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.236 [2024-07-26 14:18:05.857594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.236 [2024-07-26 14:18:05.857731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.236 [2024-07-26 14:18:05.857794] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.236 [2024-07-26 14:18:05.857837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.236 [2024-07-26 14:18:05.858023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.236 14:18:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:46.236 14:18:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:46.804 [2024-07-26 14:18:06.354466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:46.804 [2024-07-26 14:18:06.356331] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.804 [2024-07-26 14:18:06.356560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.804 [2024-07-26 14:18:06.356710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.804 [2024-07-26 14:18:06.356945] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.804 [2024-07-26 14:18:06.357068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.804 [2024-07-26 14:18:06.357266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.804 [2024-07-26 14:18:06.357407] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.804 [2024-07-26 14:18:06.357556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.804 [2024-07-26 14:18:06.357702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.804 [2024-07-26 14:18:06.357848] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:46.804 [2024-07-26 14:18:06.357990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.804 [2024-07-26 14:18:06.358150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:46.804 14:18:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.804 14:18:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:46.804 14:18:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:46.804 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:47.062 14:18:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.270 14:18:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.270 14:18:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.270 14:18:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.270 [2024-07-26 14:18:18.854630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:59.270 [2024-07-26 14:18:18.856966] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.270 [2024-07-26 14:18:18.857144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.270 [2024-07-26 14:18:18.857326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.270 [2024-07-26 14:18:18.857517] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.270 [2024-07-26 14:18:18.857547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.270 [2024-07-26 14:18:18.857563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.270 [2024-07-26 14:18:18.857580] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.270 [2024-07-26 14:18:18.857593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.270 [2024-07-26 14:18:18.857611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.270 [2024-07-26 14:18:18.857624] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.270 [2024-07-26 14:18:18.857639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.270 [2024-07-26 14:18:18.857652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:59.270 14:18:18 sw_hotplug -- 
nvme/sw_hotplug.sh@40 -- # echo 1 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:59.270 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.270 14:18:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.270 14:18:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.271 14:18:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.271 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:59.271 14:18:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:59.837 [2024-07-26 14:18:19.354606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:59.837 [2024-07-26 14:18:19.356252] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.837 [2024-07-26 14:18:19.356311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.837 [2024-07-26 14:18:19.356330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.837 [2024-07-26 14:18:19.356351] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.837 [2024-07-26 14:18:19.356364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.837 [2024-07-26 14:18:19.356378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.837 [2024-07-26 14:18:19.356391] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.837 [2024-07-26 14:18:19.356404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.837 [2024-07-26 14:18:19.356416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.837 [2024-07-26 14:18:19.356430] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:59.837 [2024-07-26 14:18:19.356441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:59.837 [2024-07-26 14:18:19.356457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:12:59.837 14:18:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.837 14:18:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:59.837 14:18:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:59.837 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:00.095 14:18:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:12.299 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.14 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.14 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.14 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.14 2 00:13:12.300 remove_attach_helper took 45.14s to complete (handling 2 nvme drive(s)) 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:12.300 14:18:31 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 72245 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 72245 ']' 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 72245 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72245 00:13:12.300 killing process with pid 72245 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72245' 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@969 -- # kill 72245 00:13:12.300 14:18:31 sw_hotplug -- common/autotest_common.sh@974 -- # wait 72245 00:13:14.202 14:18:33 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:14.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:15.028 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:15.028 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:15.028 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:15.028 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:15.028 00:13:15.028 real 2m31.618s 00:13:15.028 user 1m52.744s 00:13:15.028 sys 0m18.810s 00:13:15.028 ************************************ 00:13:15.028 END TEST sw_hotplug 00:13:15.028 ************************************ 00:13:15.028 14:18:34 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.028 14:18:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.028 14:18:34 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:13:15.028 14:18:34 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:15.028 14:18:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:15.028 14:18:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.028 14:18:34 -- common/autotest_common.sh@10 -- # set +x 00:13:15.028 ************************************ 00:13:15.028 START TEST nvme_xnvme 00:13:15.028 ************************************ 00:13:15.028 14:18:34 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:15.286 * Looking for test storage... 
00:13:15.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:15.286 14:18:34 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:15.286 14:18:34 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.286 14:18:34 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.286 14:18:34 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.286 14:18:34 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.287 14:18:34 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.287 14:18:34 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.287 14:18:34 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:15.287 14:18:34 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.287 14:18:34 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:15.287 14:18:34 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:15.287 14:18:34 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.287 14:18:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:15.287 ************************************ 00:13:15.287 START TEST xnvme_to_malloc_dd_copy 00:13:15.287 ************************************ 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
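Before the copy test starts, dd/common.sh loads a 1 GiB null_blk instance so /dev/nullb0 exists for the xnvme bdev to sit on; the matching 'modprobe -r null_blk' teardown shows up once the test finishes. A minimal sketch of that setup and teardown, assuming nothing beyond the modprobe calls visible in the trace (how an already-loaded module is handled is an assumption):

    init_null_blk() {
        # dd/common.sh@186-187 as implied by the trace.
        [[ -e /sys/module/null_blk ]] || modprobe null_blk "$@"
        return 0
    }

    remove_null_blk() {
        # dd/common.sh@191
        modprobe -r null_blk
    }

    init_null_blk gb=1    # backs /dev/nullb0 with a 1 GiB null block device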
00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:15.287 14:18:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:15.287 { 00:13:15.287 "subsystems": [ 00:13:15.287 { 00:13:15.287 "subsystem": "bdev", 00:13:15.287 "config": [ 00:13:15.287 { 00:13:15.287 "params": { 00:13:15.287 "block_size": 512, 00:13:15.287 "num_blocks": 2097152, 00:13:15.287 "name": "malloc0" 00:13:15.287 }, 00:13:15.287 "method": "bdev_malloc_create" 00:13:15.287 }, 00:13:15.287 { 00:13:15.287 "params": { 00:13:15.287 "io_mechanism": "libaio", 00:13:15.287 "filename": "/dev/nullb0", 00:13:15.287 "name": "null0" 00:13:15.287 }, 00:13:15.287 "method": "bdev_xnvme_create" 00:13:15.287 }, 00:13:15.287 { 00:13:15.287 "method": "bdev_wait_for_examine" 00:13:15.287 } 00:13:15.287 ] 00:13:15.287 } 00:13:15.287 ] 00:13:15.287 } 00:13:15.287 [2024-07-26 14:18:35.007462] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
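The JSON document printed above is the whole configuration handed to spdk_dd: a malloc bdev of 2097152 blocks of 512 bytes (1 GiB) as the source and an xnvme bdev named null0 on /dev/nullb0, using the libaio mechanism for this first pass. The '--json /dev/fd/62' in the trace is bash process substitution; a sketch of the call, assuming gen_conf is the helper that emits that JSON:

    # xnvme.sh@42 as run here: malloc0 is copied into null0, with the config
    # fed through process substitution (what the trace shows as /dev/fd/62).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)

The reverse direction at xnvme.sh@47 simply swaps --ib and --ob, copying null0 back into malloc0.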
00:13:15.287 [2024-07-26 14:18:35.007859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73606 ] 00:13:15.545 [2024-07-26 14:18:35.180048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.803 [2024-07-26 14:18:35.387368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.199  Copying: 197/1024 [MB] (197 MBps) Copying: 394/1024 [MB] (197 MBps) Copying: 592/1024 [MB] (197 MBps) Copying: 787/1024 [MB] (194 MBps) Copying: 987/1024 [MB] (200 MBps) Copying: 1024/1024 [MB] (average 197 MBps) 00:13:25.199 00:13:25.199 14:18:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:25.199 14:18:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:25.199 14:18:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:25.199 14:18:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.199 { 00:13:25.199 "subsystems": [ 00:13:25.199 { 00:13:25.199 "subsystem": "bdev", 00:13:25.199 "config": [ 00:13:25.199 { 00:13:25.199 "params": { 00:13:25.199 "block_size": 512, 00:13:25.199 "num_blocks": 2097152, 00:13:25.199 "name": "malloc0" 00:13:25.199 }, 00:13:25.199 "method": "bdev_malloc_create" 00:13:25.199 }, 00:13:25.199 { 00:13:25.199 "params": { 00:13:25.199 "io_mechanism": "libaio", 00:13:25.199 "filename": "/dev/nullb0", 00:13:25.199 "name": "null0" 00:13:25.199 }, 00:13:25.199 "method": "bdev_xnvme_create" 00:13:25.199 }, 00:13:25.199 { 00:13:25.199 "method": "bdev_wait_for_examine" 00:13:25.199 } 00:13:25.199 ] 00:13:25.199 } 00:13:25.199 ] 00:13:25.199 } 00:13:25.199 [2024-07-26 14:18:44.734886] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:25.199 [2024-07-26 14:18:44.735080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73720 ] 00:13:25.199 [2024-07-26 14:18:44.907233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.456 [2024-07-26 14:18:45.074998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.094  Copying: 196/1024 [MB] (196 MBps) Copying: 399/1024 [MB] (203 MBps) Copying: 595/1024 [MB] (196 MBps) Copying: 785/1024 [MB] (190 MBps) Copying: 976/1024 [MB] (190 MBps) Copying: 1024/1024 [MB] (average 194 MBps) 00:13:35.094 00:13:35.094 14:18:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:35.094 14:18:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:35.094 14:18:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:35.094 14:18:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:35.094 14:18:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:35.094 14:18:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:35.094 { 00:13:35.094 "subsystems": [ 00:13:35.094 { 00:13:35.094 "subsystem": "bdev", 00:13:35.094 "config": [ 00:13:35.094 { 00:13:35.094 "params": { 00:13:35.094 "block_size": 512, 00:13:35.094 "num_blocks": 2097152, 00:13:35.094 "name": "malloc0" 00:13:35.094 }, 00:13:35.094 "method": "bdev_malloc_create" 00:13:35.094 }, 00:13:35.094 { 00:13:35.094 "params": { 00:13:35.094 "io_mechanism": "io_uring", 00:13:35.094 "filename": "/dev/nullb0", 00:13:35.094 "name": "null0" 00:13:35.094 }, 00:13:35.094 "method": "bdev_xnvme_create" 00:13:35.094 }, 00:13:35.094 { 00:13:35.094 "method": "bdev_wait_for_examine" 00:13:35.094 } 00:13:35.094 ] 00:13:35.094 } 00:13:35.094 ] 00:13:35.094 } 00:13:35.094 [2024-07-26 14:18:54.527363] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
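At this point io_mechanism flips from libaio to io_uring and the same pair of copies starts again; nothing else in the configuration changes. The outer loop implied by the @38/@39 lines, sketched with the bdev parameters taken from the trace:

    declare -A method_bdev_xnvme_create_0=([name]=null0 [filename]=/dev/nullb0)
    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # Forward and reverse copies (xnvme.sh@42 and @47) per mechanism.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)
    done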
00:13:35.094 [2024-07-26 14:18:54.527550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73832 ] 00:13:35.094 [2024-07-26 14:18:54.682858] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.094 [2024-07-26 14:18:54.849259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.748  Copying: 201/1024 [MB] (201 MBps) Copying: 401/1024 [MB] (200 MBps) Copying: 597/1024 [MB] (195 MBps) Copying: 800/1024 [MB] (203 MBps) Copying: 1006/1024 [MB] (206 MBps) Copying: 1024/1024 [MB] (average 201 MBps) 00:13:44.748 00:13:44.748 14:19:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:44.748 14:19:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:44.748 14:19:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:44.749 14:19:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:44.749 { 00:13:44.749 "subsystems": [ 00:13:44.749 { 00:13:44.749 "subsystem": "bdev", 00:13:44.749 "config": [ 00:13:44.749 { 00:13:44.749 "params": { 00:13:44.749 "block_size": 512, 00:13:44.749 "num_blocks": 2097152, 00:13:44.749 "name": "malloc0" 00:13:44.749 }, 00:13:44.749 "method": "bdev_malloc_create" 00:13:44.749 }, 00:13:44.749 { 00:13:44.749 "params": { 00:13:44.749 "io_mechanism": "io_uring", 00:13:44.749 "filename": "/dev/nullb0", 00:13:44.749 "name": "null0" 00:13:44.749 }, 00:13:44.749 "method": "bdev_xnvme_create" 00:13:44.749 }, 00:13:44.749 { 00:13:44.749 "method": "bdev_wait_for_examine" 00:13:44.749 } 00:13:44.749 ] 00:13:44.749 } 00:13:44.749 ] 00:13:44.749 } 00:13:44.749 [2024-07-26 14:19:04.249455] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:44.749 [2024-07-26 14:19:04.249645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73941 ] 00:13:44.749 [2024-07-26 14:19:04.416321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.006 [2024-07-26 14:19:04.584036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.215  Copying: 197/1024 [MB] (197 MBps) Copying: 407/1024 [MB] (209 MBps) Copying: 611/1024 [MB] (204 MBps) Copying: 828/1024 [MB] (216 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:13:54.215 00:13:54.215 14:19:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:54.215 14:19:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:54.215 00:13:54.215 real 0m38.862s 00:13:54.215 user 0m33.992s 00:13:54.215 sys 0m4.340s 00:13:54.215 14:19:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.215 ************************************ 00:13:54.215 END TEST xnvme_to_malloc_dd_copy 00:13:54.215 ************************************ 00:13:54.215 14:19:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:54.215 14:19:13 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:54.215 14:19:13 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:54.215 14:19:13 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.215 14:19:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.215 ************************************ 00:13:54.215 START TEST xnvme_bdevperf 00:13:54.215 ************************************ 00:13:54.215 14:19:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:54.215 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:54.215 14:19:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.216 14:19:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.216 { 00:13:54.216 "subsystems": [ 00:13:54.216 { 00:13:54.216 "subsystem": "bdev", 00:13:54.216 "config": [ 00:13:54.216 { 00:13:54.216 "params": { 00:13:54.216 "io_mechanism": "libaio", 00:13:54.216 "filename": "/dev/nullb0", 00:13:54.216 "name": "null0" 00:13:54.216 }, 00:13:54.216 "method": "bdev_xnvme_create" 00:13:54.216 }, 00:13:54.216 { 00:13:54.216 "method": "bdev_wait_for_examine" 00:13:54.216 } 00:13:54.216 ] 00:13:54.216 } 00:13:54.216 ] 00:13:54.216 } 00:13:54.216 [2024-07-26 14:19:13.923110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:54.216 [2024-07-26 14:19:13.923611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74075 ] 00:13:54.475 [2024-07-26 14:19:14.098424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.734 [2024-07-26 14:19:14.262283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.993 Running I/O for 5 seconds... 00:14:00.264 00:14:00.264 Latency(us) 00:14:00.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.264 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:00.264 null0 : 5.00 128856.60 503.35 0.00 0.00 493.55 175.01 1370.30 00:14:00.264 =================================================================================================================== 00:14:00.264 Total : 128856.60 503.35 0.00 0.00 493.55 175.01 1370.30 00:14:00.830 14:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:00.830 14:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:00.830 14:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:00.830 14:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:00.830 14:19:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.830 14:19:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:01.086 { 00:14:01.086 "subsystems": [ 00:14:01.086 { 00:14:01.086 "subsystem": "bdev", 00:14:01.086 "config": [ 00:14:01.086 { 00:14:01.086 "params": { 00:14:01.086 "io_mechanism": "io_uring", 00:14:01.086 "filename": "/dev/nullb0", 00:14:01.086 "name": "null0" 00:14:01.086 }, 00:14:01.086 "method": "bdev_xnvme_create" 00:14:01.086 }, 00:14:01.086 { 00:14:01.086 "method": "bdev_wait_for_examine" 00:14:01.086 } 00:14:01.086 ] 00:14:01.086 } 00:14:01.086 ] 00:14:01.086 } 00:14:01.086 [2024-07-26 14:19:20.653256] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:01.086 [2024-07-26 14:19:20.653402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74150 ] 00:14:01.086 [2024-07-26 14:19:20.810838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.344 [2024-07-26 14:19:20.976120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.602 Running I/O for 5 seconds... 00:14:06.880 00:14:06.880 Latency(us) 00:14:06.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.880 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.880 null0 : 5.00 169486.98 662.06 0.00 0.00 374.62 196.42 588.33 00:14:06.880 =================================================================================================================== 00:14:06.880 Total : 169486.98 662.06 0.00 0.00 374.62 196.42 588.33 00:14:07.450 14:19:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:07.450 14:19:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:07.709 ************************************ 00:14:07.709 END TEST xnvme_bdevperf 00:14:07.709 ************************************ 00:14:07.709 00:14:07.709 real 0m13.428s 00:14:07.709 user 0m10.474s 00:14:07.709 sys 0m2.742s 00:14:07.709 14:19:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.709 14:19:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.709 ************************************ 00:14:07.709 END TEST nvme_xnvme 00:14:07.709 ************************************ 00:14:07.709 00:14:07.709 real 0m52.489s 00:14:07.709 user 0m44.542s 00:14:07.709 sys 0m7.192s 00:14:07.709 14:19:27 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.709 14:19:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.709 14:19:27 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:07.709 14:19:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:07.709 14:19:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.709 14:19:27 -- common/autotest_common.sh@10 -- # set +x 00:14:07.709 ************************************ 00:14:07.709 START TEST blockdev_xnvme 00:14:07.709 ************************************ 00:14:07.709 14:19:27 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:07.709 * Looking for test storage... 
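Both bdevperf passes above drive the same null0 xnvme bdev with 4 KiB random reads at queue depth 64 for 5 seconds; the only thing that changes between them is the io_mechanism in the generated JSON. A condensed sketch of the pair, under the same null_blk and path assumptions as the trace (the temp file name is illustrative):

modprobe null_blk gb=1
for io in libaio io_uring; do
  # One xnvme bdev over /dev/nullb0, first with libaio, then with io_uring.
  cat > /tmp/xnvme_bdevperf.json <<EOF
{
  "subsystems": [
    { "subsystem": "bdev",
      "config": [
        { "method": "bdev_xnvme_create",
          "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "$io" } },
        { "method": "bdev_wait_for_examine" }
      ] }
  ]
}
EOF
  # Same flags as the runs above: 4 KiB random reads, queue depth 64, 5 seconds,
  # targeting the null0 bdev.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/xnvme_bdevperf.json -q 64 -w randread -t 5 -T null0 -o 4096
done
modprobe -r null_blk

In the numbers above, io_uring comes out roughly 30% ahead of libaio against the same null device (about 169.5k vs 128.9k IOPS, with average latency dropping from ~494 us to ~375 us at the same queue depth).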
00:14:07.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:07.709 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:07.709 14:19:27 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:07.709 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:07.709 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74291 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:07.710 14:19:27 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74291 00:14:07.710 14:19:27 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 74291 ']' 00:14:07.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:07.710 14:19:27 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:07.710 14:19:27 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:07.710 14:19:27 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:07.710 14:19:27 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:07.710 14:19:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.969 [2024-07-26 14:19:27.541438] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:07.969 [2024-07-26 14:19:27.541640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74291 ] 00:14:07.969 [2024-07-26 14:19:27.713773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.286 [2024-07-26 14:19:27.871323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.855 14:19:28 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:08.855 14:19:28 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:14:08.855 14:19:28 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:08.855 14:19:28 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:08.855 14:19:28 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:08.855 14:19:28 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:08.855 14:19:28 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:09.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:09.373 Waiting for block devices as requested 00:14:09.373 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.632 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.632 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:09.632 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:14.930 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.930 14:19:34 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 nvme0n1 00:14:14.930 nvme1n1 00:14:14.930 nvme2n1 00:14:14.930 nvme2n2 00:14:14.930 nvme2n3 00:14:14.930 nvme3n1 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.930 14:19:34 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:14.930 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:14.931 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "811579f7-4503-4af8-87a9-cf2910d613a8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "811579f7-4503-4af8-87a9-cf2910d613a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dc693d4b-cb29-4d41-9e3d-79200de31a9d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dc693d4b-cb29-4d41-9e3d-79200de31a9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2c71a762-b4b2-40dd-85d7-fc50f2ce3f0f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c71a762-b4b2-40dd-85d7-fc50f2ce3f0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "92be8061-116a-4548-b789-033d39c830eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92be8061-116a-4548-b789-033d39c830eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b0a095e1-7585-4652-8a8b-48fb3e7fab5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0a095e1-7585-4652-8a8b-48fb3e7fab5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "24da7998-3bb2-462f-a6d5-e8531e1a8b96"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "24da7998-3bb2-462f-a6d5-e8531e1a8b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:15.203 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:15.203 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:15.203 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:15.203 14:19:34 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74291 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 74291 ']' 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 74291 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74291 00:14:15.203 killing process with pid 74291 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.203 14:19:34 blockdev_xnvme -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74291' 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 74291 00:14:15.203 14:19:34 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 74291 00:14:17.110 14:19:36 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:17.110 14:19:36 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:17.110 14:19:36 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:17.110 14:19:36 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.110 14:19:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.110 ************************************ 00:14:17.110 START TEST bdev_hello_world 00:14:17.110 ************************************ 00:14:17.110 14:19:36 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:17.110 [2024-07-26 14:19:36.620631] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:17.110 [2024-07-26 14:19:36.620775] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74656 ] 00:14:17.110 [2024-07-26 14:19:36.777116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.369 [2024-07-26 14:19:36.928013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.628 [2024-07-26 14:19:37.262963] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:17.628 [2024-07-26 14:19:37.263024] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:17.628 [2024-07-26 14:19:37.263048] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:17.628 [2024-07-26 14:19:37.265445] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:17.628 [2024-07-26 14:19:37.265801] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:17.628 [2024-07-26 14:19:37.265829] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:17.628 [2024-07-26 14:19:37.266032] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
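Stripped of the run_test wrappers, the hello_world pass is just the stock hello_bdev example pointed at the xnvme config generated earlier in this test (six bdev_xnvme_create entries over the /dev/nvme*n* nodes) and told which bdev to use. A one-line sketch, assuming that bdev.json is still in place:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1

As the notices above show, it opens nvme0n1, gets an io channel, writes "Hello World!", reads it back, prints the string, and then stops the app.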
00:14:17.628 00:14:17.628 [2024-07-26 14:19:37.266062] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:18.567 ************************************ 00:14:18.567 END TEST bdev_hello_world 00:14:18.567 ************************************ 00:14:18.567 00:14:18.567 real 0m1.702s 00:14:18.567 user 0m1.418s 00:14:18.567 sys 0m0.171s 00:14:18.567 14:19:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.567 14:19:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:18.567 14:19:38 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:18.567 14:19:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:18.567 14:19:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.567 14:19:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.567 ************************************ 00:14:18.567 START TEST bdev_bounds 00:14:18.567 ************************************ 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:14:18.567 Process bdevio pid: 74687 00:14:18.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74687 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74687' 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74687 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74687 ']' 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.567 14:19:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:18.826 [2024-07-26 14:19:38.398831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:18.826 [2024-07-26 14:19:38.399042] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74687 ] 00:14:18.826 [2024-07-26 14:19:38.568887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:19.086 [2024-07-26 14:19:38.733701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.086 [2024-07-26 14:19:38.733798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.086 [2024-07-26 14:19:38.733810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.653 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.653 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:14:19.653 14:19:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:19.653 I/O targets: 00:14:19.653 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:19.653 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:19.653 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:19.653 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:19.653 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:19.653 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:19.653 00:14:19.653 00:14:19.653 CUnit - A unit testing framework for C - Version 2.1-3 00:14:19.653 http://cunit.sourceforge.net/ 00:14:19.653 00:14:19.653 00:14:19.653 Suite: bdevio tests on: nvme3n1 00:14:19.653 Test: blockdev write read block ...passed 00:14:19.653 Test: blockdev write zeroes read block ...passed 00:14:19.653 Test: blockdev write zeroes read no split ...passed 00:14:19.912 Test: blockdev write zeroes read split ...passed 00:14:19.912 Test: blockdev write zeroes read split partial ...passed 00:14:19.912 Test: blockdev reset ...passed 00:14:19.912 Test: blockdev write read 8 blocks ...passed 00:14:19.912 Test: blockdev write read size > 128k ...passed 00:14:19.912 Test: blockdev write read invalid size ...passed 00:14:19.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.912 Test: blockdev write read max offset ...passed 00:14:19.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.912 Test: blockdev writev readv 8 blocks ...passed 00:14:19.912 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.912 Test: blockdev writev readv block ...passed 00:14:19.912 Test: blockdev writev readv size > 128k ...passed 00:14:19.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.912 Test: blockdev comparev and writev ...passed 00:14:19.912 Test: blockdev nvme passthru rw ...passed 00:14:19.912 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.912 Test: blockdev nvme admin passthru ...passed 00:14:19.912 Test: blockdev copy ...passed 00:14:19.912 Suite: bdevio tests on: nvme2n3 00:14:19.912 Test: blockdev write read block ...passed 00:14:19.912 Test: blockdev write zeroes read block ...passed 00:14:19.912 Test: blockdev write zeroes read no split ...passed 00:14:19.912 Test: blockdev write zeroes read split ...passed 00:14:19.912 Test: blockdev write zeroes read split partial ...passed 00:14:19.912 Test: blockdev reset ...passed 
00:14:19.912 Test: blockdev write read 8 blocks ...passed 00:14:19.912 Test: blockdev write read size > 128k ...passed 00:14:19.912 Test: blockdev write read invalid size ...passed 00:14:19.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.913 Test: blockdev write read max offset ...passed 00:14:19.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.913 Test: blockdev writev readv 8 blocks ...passed 00:14:19.913 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.913 Test: blockdev writev readv block ...passed 00:14:19.913 Test: blockdev writev readv size > 128k ...passed 00:14:19.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.913 Test: blockdev comparev and writev ...passed 00:14:19.913 Test: blockdev nvme passthru rw ...passed 00:14:19.913 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.913 Test: blockdev nvme admin passthru ...passed 00:14:19.913 Test: blockdev copy ...passed 00:14:19.913 Suite: bdevio tests on: nvme2n2 00:14:19.913 Test: blockdev write read block ...passed 00:14:19.913 Test: blockdev write zeroes read block ...passed 00:14:19.913 Test: blockdev write zeroes read no split ...passed 00:14:19.913 Test: blockdev write zeroes read split ...passed 00:14:19.913 Test: blockdev write zeroes read split partial ...passed 00:14:19.913 Test: blockdev reset ...passed 00:14:19.913 Test: blockdev write read 8 blocks ...passed 00:14:19.913 Test: blockdev write read size > 128k ...passed 00:14:19.913 Test: blockdev write read invalid size ...passed 00:14:19.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.913 Test: blockdev write read max offset ...passed 00:14:19.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.913 Test: blockdev writev readv 8 blocks ...passed 00:14:19.913 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.913 Test: blockdev writev readv block ...passed 00:14:19.913 Test: blockdev writev readv size > 128k ...passed 00:14:19.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.913 Test: blockdev comparev and writev ...passed 00:14:19.913 Test: blockdev nvme passthru rw ...passed 00:14:19.913 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.913 Test: blockdev nvme admin passthru ...passed 00:14:19.913 Test: blockdev copy ...passed 00:14:19.913 Suite: bdevio tests on: nvme2n1 00:14:19.913 Test: blockdev write read block ...passed 00:14:19.913 Test: blockdev write zeroes read block ...passed 00:14:19.913 Test: blockdev write zeroes read no split ...passed 00:14:19.913 Test: blockdev write zeroes read split ...passed 00:14:19.913 Test: blockdev write zeroes read split partial ...passed 00:14:19.913 Test: blockdev reset ...passed 00:14:19.913 Test: blockdev write read 8 blocks ...passed 00:14:19.913 Test: blockdev write read size > 128k ...passed 00:14:19.913 Test: blockdev write read invalid size ...passed 00:14:19.913 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.913 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.913 Test: blockdev write read max offset ...passed 00:14:19.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.913 Test: blockdev writev readv 8 blocks 
...passed 00:14:19.913 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.913 Test: blockdev writev readv block ...passed 00:14:19.913 Test: blockdev writev readv size > 128k ...passed 00:14:19.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.913 Test: blockdev comparev and writev ...passed 00:14:19.913 Test: blockdev nvme passthru rw ...passed 00:14:19.913 Test: blockdev nvme passthru vendor specific ...passed 00:14:19.913 Test: blockdev nvme admin passthru ...passed 00:14:19.913 Test: blockdev copy ...passed 00:14:19.913 Suite: bdevio tests on: nvme1n1 00:14:19.913 Test: blockdev write read block ...passed 00:14:19.913 Test: blockdev write zeroes read block ...passed 00:14:19.913 Test: blockdev write zeroes read no split ...passed 00:14:20.172 Test: blockdev write zeroes read split ...passed 00:14:20.172 Test: blockdev write zeroes read split partial ...passed 00:14:20.172 Test: blockdev reset ...passed 00:14:20.172 Test: blockdev write read 8 blocks ...passed 00:14:20.172 Test: blockdev write read size > 128k ...passed 00:14:20.172 Test: blockdev write read invalid size ...passed 00:14:20.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:20.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:20.172 Test: blockdev write read max offset ...passed 00:14:20.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:20.172 Test: blockdev writev readv 8 blocks ...passed 00:14:20.172 Test: blockdev writev readv 30 x 1block ...passed 00:14:20.172 Test: blockdev writev readv block ...passed 00:14:20.172 Test: blockdev writev readv size > 128k ...passed 00:14:20.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:20.172 Test: blockdev comparev and writev ...passed 00:14:20.172 Test: blockdev nvme passthru rw ...passed 00:14:20.172 Test: blockdev nvme passthru vendor specific ...passed 00:14:20.172 Test: blockdev nvme admin passthru ...passed 00:14:20.172 Test: blockdev copy ...passed 00:14:20.172 Suite: bdevio tests on: nvme0n1 00:14:20.172 Test: blockdev write read block ...passed 00:14:20.172 Test: blockdev write zeroes read block ...passed 00:14:20.172 Test: blockdev write zeroes read no split ...passed 00:14:20.172 Test: blockdev write zeroes read split ...passed 00:14:20.172 Test: blockdev write zeroes read split partial ...passed 00:14:20.172 Test: blockdev reset ...passed 00:14:20.172 Test: blockdev write read 8 blocks ...passed 00:14:20.172 Test: blockdev write read size > 128k ...passed 00:14:20.172 Test: blockdev write read invalid size ...passed 00:14:20.172 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:20.172 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:20.172 Test: blockdev write read max offset ...passed 00:14:20.172 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:20.172 Test: blockdev writev readv 8 blocks ...passed 00:14:20.172 Test: blockdev writev readv 30 x 1block ...passed 00:14:20.172 Test: blockdev writev readv block ...passed 00:14:20.172 Test: blockdev writev readv size > 128k ...passed 00:14:20.172 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:20.172 Test: blockdev comparev and writev ...passed 00:14:20.172 Test: blockdev nvme passthru rw ...passed 00:14:20.172 Test: blockdev nvme passthru vendor specific ...passed 00:14:20.172 Test: blockdev nvme admin passthru ...passed 00:14:20.172 Test: blockdev copy ...passed 
00:14:20.172 00:14:20.172 Run Summary: Type Total Ran Passed Failed Inactive 00:14:20.172 suites 6 6 n/a 0 0 00:14:20.172 tests 138 138 138 0 0 00:14:20.172 asserts 780 780 780 0 n/a 00:14:20.172 00:14:20.172 Elapsed time = 1.147 seconds 00:14:20.172 0 00:14:20.172 14:19:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74687 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74687 ']' 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74687 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74687 00:14:20.173 killing process with pid 74687 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74687' 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74687 00:14:20.173 14:19:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74687 00:14:21.552 ************************************ 00:14:21.552 END TEST bdev_bounds 00:14:21.552 ************************************ 00:14:21.552 14:19:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:21.552 00:14:21.552 real 0m2.605s 00:14:21.552 user 0m6.196s 00:14:21.552 sys 0m0.357s 00:14:21.553 14:19:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.553 14:19:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 14:19:40 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:21.553 14:19:40 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:21.553 14:19:40 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.553 14:19:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 ************************************ 00:14:21.553 START TEST bdev_nbd 00:14:21.553 ************************************ 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
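The bdev_nbd test being set up here exports each of the six xnvme bdevs as a kernel /dev/nbdN node through the SPDK nbd RPCs and reads one 4 KiB block from every node. A condensed sketch of that flow, assuming bdev_svc is started against the same bdev.json and RPC socket used below; the sleep is a crude stand-in for the harness's waitforlisten, and the dd target is simplified to /dev/null instead of the nbdtest scratch file:

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-nbd.sock
# Start the bdev service that owns the xnvme bdevs and serves the nbd RPCs.
"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$SPDK"/test/bdev/bdev.json &
svc_pid=$!
sleep 1
for bdev in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
  # nbd_start_disk prints the allocated node (e.g. /dev/nbd0), as in the trace below.
  nbd=$("$SPDK"/scripts/rpc.py -s "$SOCK" nbd_start_disk "$bdev")
  # Same sanity read the harness does: one direct 4 KiB block from the nbd node.
  dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct
done
# Dump the bdev-to-nbd mapping JSON, as nbd_get_disks does at the end of this test.
"$SPDK"/scripts/rpc.py -s "$SOCK" nbd_get_disks
kill "$svc_pid"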
00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74753 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74753 /var/tmp/spdk-nbd.sock 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74753 ']' 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:21.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.553 14:19:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:21.553 [2024-07-26 14:19:41.064126] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:21.553 [2024-07-26 14:19:41.064582] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.553 [2024-07-26 14:19:41.235112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.812 [2024-07-26 14:19:41.397478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:22.413 14:19:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.672 
1+0 records in 00:14:22.672 1+0 records out 00:14:22.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820954 s, 5.0 MB/s 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:22.672 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:22.937 1+0 records in 00:14:22.937 1+0 records out 00:14:22.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712108 s, 5.8 MB/s 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:22.937 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:23.196 14:19:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.196 1+0 records in 00:14:23.196 1+0 records out 00:14:23.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000507121 s, 8.1 MB/s 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:23.196 14:19:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.455 1+0 records in 00:14:23.455 1+0 records out 00:14:23.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054975 s, 7.5 MB/s 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:23.455 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.712 1+0 records in 00:14:23.712 1+0 records out 00:14:23.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010009 s, 4.1 MB/s 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:23.712 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:14:23.969 14:19:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.969 1+0 records in 00:14:23.969 1+0 records out 00:14:23.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898262 s, 4.6 MB/s 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:23.969 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:24.227 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:24.227 { 00:14:24.227 "nbd_device": "/dev/nbd0", 00:14:24.227 "bdev_name": "nvme0n1" 00:14:24.227 }, 00:14:24.227 { 00:14:24.227 "nbd_device": "/dev/nbd1", 00:14:24.227 "bdev_name": "nvme1n1" 00:14:24.227 }, 00:14:24.227 { 00:14:24.227 "nbd_device": "/dev/nbd2", 00:14:24.227 "bdev_name": "nvme2n1" 00:14:24.227 }, 00:14:24.227 { 00:14:24.227 "nbd_device": "/dev/nbd3", 00:14:24.227 "bdev_name": "nvme2n2" 00:14:24.227 }, 00:14:24.227 { 00:14:24.227 "nbd_device": "/dev/nbd4", 00:14:24.227 "bdev_name": "nvme2n3" 00:14:24.227 }, 00:14:24.227 { 00:14:24.227 "nbd_device": "/dev/nbd5", 00:14:24.227 "bdev_name": "nvme3n1" 00:14:24.227 } 00:14:24.227 ]' 00:14:24.227 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:24.227 14:19:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:24.228 { 00:14:24.228 "nbd_device": "/dev/nbd0", 00:14:24.228 "bdev_name": "nvme0n1" 00:14:24.228 }, 00:14:24.228 { 00:14:24.228 "nbd_device": "/dev/nbd1", 00:14:24.228 "bdev_name": "nvme1n1" 00:14:24.228 }, 00:14:24.228 { 00:14:24.228 "nbd_device": "/dev/nbd2", 00:14:24.228 "bdev_name": "nvme2n1" 00:14:24.228 }, 00:14:24.228 { 00:14:24.228 "nbd_device": "/dev/nbd3", 00:14:24.228 "bdev_name": "nvme2n2" 00:14:24.228 }, 00:14:24.228 { 00:14:24.228 "nbd_device": "/dev/nbd4", 00:14:24.228 "bdev_name": "nvme2n3" 00:14:24.228 }, 00:14:24.228 { 00:14:24.228 "nbd_device": "/dev/nbd5", 00:14:24.228 "bdev_name": "nvme3n1" 00:14:24.228 } 00:14:24.228 ]' 00:14:24.228 14:19:43 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.486 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.744 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.003 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.261 14:19:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.519 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.778 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.037 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:26.295 14:19:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:26.554 /dev/nbd0 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.554 1+0 records in 00:14:26.554 1+0 records out 00:14:26.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603824 s, 6.8 MB/s 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:26.554 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:26.813 /dev/nbd1 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:26.813 1+0 records in 00:14:26.813 1+0 records out 00:14:26.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834723 s, 4.9 MB/s 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:26.813 14:19:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:26.813 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:27.072 /dev/nbd10 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.072 1+0 records in 00:14:27.072 1+0 records out 00:14:27.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808339 s, 5.1 MB/s 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.072 14:19:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:27.331 /dev/nbd11 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.331 14:19:47 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.331 1+0 records in 00:14:27.331 1+0 records out 00:14:27.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608233 s, 6.7 MB/s 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:27.331 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.332 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.332 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:27.332 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.332 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.332 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:27.590 /dev/nbd12 00:14:27.849 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.850 1+0 records in 00:14:27.850 1+0 records out 00:14:27.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000858675 s, 4.8 MB/s 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:27.850 /dev/nbd13 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:27.850 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:27.850 1+0 records in 00:14:27.850 1+0 records out 00:14:27.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861839 s, 4.8 MB/s 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd0", 00:14:28.109 "bdev_name": "nvme0n1" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd1", 00:14:28.109 "bdev_name": "nvme1n1" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd10", 00:14:28.109 "bdev_name": "nvme2n1" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd11", 00:14:28.109 "bdev_name": "nvme2n2" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd12", 00:14:28.109 "bdev_name": "nvme2n3" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd13", 00:14:28.109 "bdev_name": "nvme3n1" 00:14:28.109 } 00:14:28.109 ]' 00:14:28.109 14:19:47 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd0", 00:14:28.109 "bdev_name": "nvme0n1" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd1", 00:14:28.109 "bdev_name": "nvme1n1" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd10", 00:14:28.109 "bdev_name": "nvme2n1" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd11", 00:14:28.109 "bdev_name": "nvme2n2" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd12", 00:14:28.109 "bdev_name": "nvme2n3" 00:14:28.109 }, 00:14:28.109 { 00:14:28.109 "nbd_device": "/dev/nbd13", 00:14:28.109 "bdev_name": "nvme3n1" 00:14:28.109 } 00:14:28.109 ]' 00:14:28.109 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:28.368 /dev/nbd1 00:14:28.368 /dev/nbd10 00:14:28.368 /dev/nbd11 00:14:28.368 /dev/nbd12 00:14:28.368 /dev/nbd13' 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:28.368 /dev/nbd1 00:14:28.368 /dev/nbd10 00:14:28.368 /dev/nbd11 00:14:28.368 /dev/nbd12 00:14:28.368 /dev/nbd13' 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:28.368 256+0 records in 00:14:28.368 256+0 records out 00:14:28.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00897162 s, 117 MB/s 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.368 14:19:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:28.368 256+0 records in 00:14:28.368 256+0 records out 00:14:28.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170207 s, 6.2 MB/s 00:14:28.368 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.368 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:28.627 256+0 records in 00:14:28.627 256+0 records out 00:14:28.627 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.188768 s, 5.6 MB/s 00:14:28.627 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.627 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:28.931 256+0 records in 00:14:28.931 256+0 records out 00:14:28.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.178758 s, 5.9 MB/s 00:14:28.931 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.931 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:28.931 256+0 records in 00:14:28.931 256+0 records out 00:14:28.931 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174864 s, 6.0 MB/s 00:14:28.931 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:28.931 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:29.189 256+0 records in 00:14:29.189 256+0 records out 00:14:29.189 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172875 s, 6.1 MB/s 00:14:29.189 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:29.189 14:19:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:29.448 256+0 records in 00:14:29.448 256+0 records out 00:14:29.448 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161243 s, 6.5 MB/s 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.448 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.707 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.966 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:30.224 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:30.224 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:30.224 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:30.224 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.225 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.225 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:30.225 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.225 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.225 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.225 14:19:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.484 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:30.743 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:31.002 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:31.002 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:31.002 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:31.003 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.003 14:19:50 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.003 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:31.262 14:19:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:31.262 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:31.262 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:31.262 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:31.521 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:31.780 malloc_lvol_verify 00:14:31.780 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:31.780 93d881ec-df62-4fcc-aab0-0a4e938279d8 00:14:32.040 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:32.040 b87368da-65cd-42d4-a0e1-744e6f4ceefc 00:14:32.040 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:32.299 /dev/nbd0 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:32.299 mke2fs 1.46.5 (30-Dec-2021) 00:14:32.299 Discarding device blocks: 0/4096 done 
00:14:32.299 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:32.299 00:14:32.299 Allocating group tables: 0/1 done 00:14:32.299 Writing inode tables: 0/1 done 00:14:32.299 Creating journal (1024 blocks): done 00:14:32.299 Writing superblocks and filesystem accounting information: 0/1 done 00:14:32.299 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.299 14:19:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74753 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 74753 ']' 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74753 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74753 00:14:32.558 killing process with pid 74753 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74753' 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74753 00:14:32.558 14:19:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74753 00:14:33.936 14:19:53 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:33.936 00:14:33.936 real 0m12.339s 00:14:33.936 user 0m17.201s 00:14:33.936 sys 0m4.118s 00:14:33.936 14:19:53 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.936 14:19:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:33.936 ************************************ 00:14:33.936 END TEST bdev_nbd 00:14:33.936 ************************************ 00:14:33.936 14:19:53 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:33.936 14:19:53 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:33.936 14:19:53 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:33.936 14:19:53 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:33.936 14:19:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:33.936 14:19:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.936 14:19:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.936 ************************************ 00:14:33.936 START TEST bdev_fio 00:14:33.936 ************************************ 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:33.936 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:14:33.936 
14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:33.936 ************************************ 00:14:33.936 START TEST bdev_fio_rw_verify 00:14:33.936 ************************************ 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.936 14:19:53 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:33.936 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.936 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.936 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.936 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.936 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.936 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:33.936 fio-3.35 00:14:33.936 Starting 6 threads 00:14:46.155 00:14:46.155 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75172: Fri Jul 26 14:20:04 2024 00:14:46.155 read: IOPS=29.0k, 
BW=113MiB/s (119MB/s)(1133MiB/10001msec) 00:14:46.155 slat (usec): min=2, max=757, avg= 7.06, stdev= 4.11 00:14:46.155 clat (usec): min=88, max=5278, avg=647.75, stdev=212.11 00:14:46.155 lat (usec): min=95, max=5286, avg=654.81, stdev=212.88 00:14:46.155 clat percentiles (usec): 00:14:46.155 | 50.000th=[ 676], 99.000th=[ 1139], 99.900th=[ 1598], 99.990th=[ 3687], 00:14:46.155 | 99.999th=[ 5276] 00:14:46.155 write: IOPS=29.2k, BW=114MiB/s (120MB/s)(1143MiB/10001msec); 0 zone resets 00:14:46.155 slat (usec): min=8, max=1374, avg=24.97, stdev=22.73 00:14:46.155 clat (usec): min=83, max=10354, avg=736.40, stdev=275.89 00:14:46.155 lat (usec): min=115, max=10374, avg=761.37, stdev=276.94 00:14:46.155 clat percentiles (usec): 00:14:46.155 | 50.000th=[ 742], 99.000th=[ 1385], 99.900th=[ 4080], 99.990th=[ 6718], 00:14:46.155 | 99.999th=[ 9372] 00:14:46.155 bw ( KiB/s): min=95336, max=142352, per=100.00%, avg=117483.95, stdev=2251.44, samples=114 00:14:46.155 iops : min=23834, max=35588, avg=29370.89, stdev=562.85, samples=114 00:14:46.155 lat (usec) : 100=0.01%, 250=2.70%, 500=16.26%, 750=40.85%, 1000=34.54% 00:14:46.155 lat (msec) : 2=5.49%, 4=0.10%, 10=0.05%, 20=0.01% 00:14:46.155 cpu : usr=61.89%, sys=25.76%, ctx=7009, majf=0, minf=24644 00:14:46.155 IO depths : 1=12.0%, 2=24.4%, 4=50.5%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:46.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.155 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:46.155 issued rwts: total=290078,292515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:46.155 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:46.155 00:14:46.155 Run status group 0 (all jobs): 00:14:46.155 READ: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=1133MiB (1188MB), run=10001-10001msec 00:14:46.155 WRITE: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=1143MiB (1198MB), run=10001-10001msec 00:14:46.155 ----------------------------------------------------- 00:14:46.155 Suppressions used: 00:14:46.155 count bytes template 00:14:46.155 6 48 /usr/src/fio/parse.c 00:14:46.155 2231 214176 /usr/src/fio/iolog.c 00:14:46.155 1 8 libtcmalloc_minimal.so 00:14:46.155 1 904 libcrypto.so 00:14:46.155 ----------------------------------------------------- 00:14:46.155 00:14:46.155 00:14:46.155 real 0m12.058s 00:14:46.155 user 0m38.776s 00:14:46.155 sys 0m15.762s 00:14:46.155 14:20:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.155 14:20:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:46.155 ************************************ 00:14:46.155 END TEST bdev_fio_rw_verify 00:14:46.155 ************************************ 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:14:46.156 14:20:05 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "811579f7-4503-4af8-87a9-cf2910d613a8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "811579f7-4503-4af8-87a9-cf2910d613a8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "dc693d4b-cb29-4d41-9e3d-79200de31a9d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dc693d4b-cb29-4d41-9e3d-79200de31a9d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "2c71a762-b4b2-40dd-85d7-fc50f2ce3f0f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2c71a762-b4b2-40dd-85d7-fc50f2ce3f0f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "92be8061-116a-4548-b789-033d39c830eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "92be8061-116a-4548-b789-033d39c830eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "b0a095e1-7585-4652-8a8b-48fb3e7fab5b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b0a095e1-7585-4652-8a8b-48fb3e7fab5b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "24da7998-3bb2-462f-a6d5-e8531e1a8b96"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "24da7998-3bb2-462f-a6d5-e8531e1a8b96",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:46.156 /home/vagrant/spdk_repo/spdk 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:46.156 
14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:14:46.156 ************************************ 00:14:46.156 END TEST bdev_fio 00:14:46.156 ************************************ 00:14:46.156 00:14:46.156 real 0m12.233s 00:14:46.156 user 0m38.860s 00:14:46.156 sys 0m15.849s 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.156 14:20:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:46.156 14:20:05 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:46.156 14:20:05 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:46.156 14:20:05 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:46.156 14:20:05 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.156 14:20:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:46.156 ************************************ 00:14:46.156 START TEST bdev_verify 00:14:46.156 ************************************ 00:14:46.156 14:20:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:46.156 [2024-07-26 14:20:05.743336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:46.156 [2024-07-26 14:20:05.743474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75342 ] 00:14:46.156 [2024-07-26 14:20:05.903196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:46.415 [2024-07-26 14:20:06.069844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.415 [2024-07-26 14:20:06.069860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.980 Running I/O for 5 seconds... 
00:14:52.256 00:14:52.256 Latency(us) 00:14:52.256 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.256 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:52.256 Verification LBA range: start 0x0 length 0xa0000 00:14:52.256 nvme0n1 : 5.05 1621.81 6.34 0.00 0.00 78788.64 8460.10 70540.57 00:14:52.256 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.256 Verification LBA range: start 0xa0000 length 0xa0000 00:14:52.256 nvme0n1 : 5.05 1647.21 6.43 0.00 0.00 77562.97 14239.19 60531.43 00:14:52.256 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x0 length 0xbd0bd 00:14:52.257 nvme1n1 : 5.05 2823.97 11.03 0.00 0.00 45074.63 5004.57 68634.07 00:14:52.257 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:52.257 nvme1n1 : 5.06 2852.15 11.14 0.00 0.00 44549.27 5362.04 56003.49 00:14:52.257 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x0 length 0x80000 00:14:52.257 nvme2n1 : 5.06 1643.91 6.42 0.00 0.00 77381.20 8579.26 74830.20 00:14:52.257 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x80000 length 0x80000 00:14:52.257 nvme2n1 : 5.05 1648.02 6.44 0.00 0.00 77122.28 13285.93 71493.82 00:14:52.257 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x0 length 0x80000 00:14:52.257 nvme2n2 : 5.06 1619.92 6.33 0.00 0.00 78379.99 10724.07 62914.56 00:14:52.257 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x80000 length 0x80000 00:14:52.257 nvme2n2 : 5.05 1646.18 6.43 0.00 0.00 77058.93 12332.68 71493.82 00:14:52.257 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x0 length 0x80000 00:14:52.257 nvme2n3 : 5.06 1619.46 6.33 0.00 0.00 78261.95 11021.96 64821.06 00:14:52.257 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x80000 length 0x80000 00:14:52.257 nvme2n3 : 5.07 1667.80 6.51 0.00 0.00 75920.80 8043.05 71493.82 00:14:52.257 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x0 length 0x20000 00:14:52.257 nvme3n1 : 5.06 1619.04 6.32 0.00 0.00 78148.54 6940.86 71970.44 00:14:52.257 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:52.257 Verification LBA range: start 0x20000 length 0x20000 00:14:52.257 nvme3n1 : 5.07 1666.57 6.51 0.00 0.00 75863.18 2025.66 66250.94 00:14:52.257 =================================================================================================================== 00:14:52.257 Total : 22076.04 86.23 0.00 0.00 69049.38 2025.66 74830.20 00:14:53.195 00:14:53.195 real 0m7.055s 00:14:53.195 user 0m11.029s 00:14:53.195 sys 0m1.694s 00:14:53.195 14:20:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.195 ************************************ 00:14:53.195 END TEST bdev_verify 00:14:53.195 ************************************ 00:14:53.195 14:20:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:14:53.195 14:20:12 blockdev_xnvme -- bdev/blockdev.sh@777 -- 
# run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:53.195 14:20:12 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:14:53.195 14:20:12 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.195 14:20:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.195 ************************************ 00:14:53.195 START TEST bdev_verify_big_io 00:14:53.195 ************************************ 00:14:53.195 14:20:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:53.195 [2024-07-26 14:20:12.826874] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:53.195 [2024-07-26 14:20:12.827052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75444 ] 00:14:53.454 [2024-07-26 14:20:12.984820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:53.454 [2024-07-26 14:20:13.165035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.454 [2024-07-26 14:20:13.165046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.021 Running I/O for 5 seconds... 00:15:00.635 00:15:00.635 Latency(us) 00:15:00.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.635 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x0 length 0xa000 00:15:00.635 nvme0n1 : 5.99 130.84 8.18 0.00 0.00 957820.80 76260.07 1197283.14 00:15:00.635 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0xa000 length 0xa000 00:15:00.635 nvme0n1 : 5.93 140.31 8.77 0.00 0.00 885006.39 144894.14 1143901.09 00:15:00.635 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x0 length 0xbd0b 00:15:00.635 nvme1n1 : 5.98 171.33 10.71 0.00 0.00 712944.79 14596.65 865551.83 00:15:00.635 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:00.635 nvme1n1 : 5.95 123.64 7.73 0.00 0.00 975706.18 13285.93 1395559.33 00:15:00.635 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x0 length 0x8000 00:15:00.635 nvme2n1 : 6.00 138.76 8.67 0.00 0.00 851576.73 67680.81 945624.90 00:15:00.635 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x8000 length 0x8000 00:15:00.635 nvme2n1 : 5.93 142.89 8.93 0.00 0.00 818990.91 94371.84 1021884.97 00:15:00.635 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x0 length 0x8000 00:15:00.635 nvme2n2 : 5.98 93.64 5.85 0.00 0.00 1224029.49 49807.36 2394566.28 00:15:00.635 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x8000 length 0x8000 00:15:00.635 nvme2n2 : 5.96 107.45 6.72 
0.00 0.00 1054898.73 186837.18 1037136.99 00:15:00.635 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x0 length 0x8000 00:15:00.635 nvme2n3 : 5.98 114.99 7.19 0.00 0.00 964020.01 44564.48 2043769.95 00:15:00.635 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x8000 length 0x8000 00:15:00.635 nvme2n3 : 5.96 126.18 7.89 0.00 0.00 883471.23 15192.44 1090519.04 00:15:00.635 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x0 length 0x2000 00:15:00.635 nvme3n1 : 6.00 130.67 8.17 0.00 0.00 822495.19 9889.98 1311673.25 00:15:00.635 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:00.635 Verification LBA range: start 0x2000 length 0x2000 00:15:00.635 nvme3n1 : 5.96 134.16 8.39 0.00 0.00 803631.70 12213.53 1647217.57 00:15:00.635 =================================================================================================================== 00:15:00.635 Total : 1554.88 97.18 0.00 0.00 895948.02 9889.98 2394566.28 00:15:01.572 00:15:01.572 real 0m8.223s 00:15:01.572 user 0m14.799s 00:15:01.572 sys 0m0.541s 00:15:01.572 14:20:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.572 14:20:20 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.572 ************************************ 00:15:01.572 END TEST bdev_verify_big_io 00:15:01.572 ************************************ 00:15:01.572 14:20:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:01.572 14:20:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:01.572 14:20:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.572 14:20:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.572 ************************************ 00:15:01.572 START TEST bdev_write_zeroes 00:15:01.572 ************************************ 00:15:01.572 14:20:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:01.572 [2024-07-26 14:20:21.131349] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:01.572 [2024-07-26 14:20:21.131520] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75557 ] 00:15:01.572 [2024-07-26 14:20:21.299065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.831 [2024-07-26 14:20:21.473088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.089 Running I/O for 1 seconds... 
00:15:03.468 00:15:03.468 Latency(us) 00:15:03.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.468 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:03.468 nvme0n1 : 1.01 12565.10 49.08 0.00 0.00 10176.66 7000.44 18230.92 00:15:03.468 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:03.468 nvme1n1 : 1.01 18227.30 71.20 0.00 0.00 7007.77 4259.84 14596.65 00:15:03.468 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:03.468 nvme2n1 : 1.01 12495.00 48.81 0.00 0.00 10164.32 6732.33 15371.17 00:15:03.468 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:03.468 nvme2n2 : 1.02 12545.70 49.01 0.00 0.00 10113.80 4468.36 15132.86 00:15:03.468 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:03.468 nvme2n3 : 1.02 12530.48 48.95 0.00 0.00 10116.60 4915.20 15609.48 00:15:03.468 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:03.468 nvme3n1 : 1.02 12514.76 48.89 0.00 0.00 10121.49 5272.67 16681.89 00:15:03.468 =================================================================================================================== 00:15:03.468 Total : 80878.34 315.93 0.00 0.00 9435.29 4259.84 18230.92 00:15:04.401 ************************************ 00:15:04.402 END TEST bdev_write_zeroes 00:15:04.402 ************************************ 00:15:04.402 00:15:04.402 real 0m3.046s 00:15:04.402 user 0m2.299s 00:15:04.402 sys 0m0.548s 00:15:04.402 14:20:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.402 14:20:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:04.402 14:20:24 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:04.402 14:20:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:04.402 14:20:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.402 14:20:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:04.402 ************************************ 00:15:04.402 START TEST bdev_json_nonenclosed 00:15:04.402 ************************************ 00:15:04.402 14:20:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:04.661 [2024-07-26 14:20:24.214844] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:04.661 [2024-07-26 14:20:24.215042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75610 ] 00:15:04.661 [2024-07-26 14:20:24.371934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.920 [2024-07-26 14:20:24.539693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.920 [2024-07-26 14:20:24.539827] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:15:04.920 [2024-07-26 14:20:24.539855] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:04.920 [2024-07-26 14:20:24.539870] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:05.180 ************************************ 00:15:05.180 END TEST bdev_json_nonenclosed 00:15:05.180 ************************************ 00:15:05.180 00:15:05.180 real 0m0.785s 00:15:05.180 user 0m0.567s 00:15:05.180 sys 0m0.114s 00:15:05.180 14:20:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:05.180 14:20:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:05.440 14:20:24 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:05.440 14:20:24 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:05.440 14:20:24 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:05.440 14:20:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.440 ************************************ 00:15:05.440 START TEST bdev_json_nonarray 00:15:05.440 ************************************ 00:15:05.440 14:20:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:05.440 [2024-07-26 14:20:25.075115] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:05.440 [2024-07-26 14:20:25.075594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75641 ] 00:15:05.699 [2024-07-26 14:20:25.247473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.699 [2024-07-26 14:20:25.411797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.699 [2024-07-26 14:20:25.411952] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:05.699 [2024-07-26 14:20:25.411984] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:05.699 [2024-07-26 14:20:25.412001] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:06.267 ************************************ 00:15:06.267 END TEST bdev_json_nonarray 00:15:06.267 ************************************ 00:15:06.267 00:15:06.267 real 0m0.809s 00:15:06.267 user 0m0.568s 00:15:06.267 sys 0m0.135s 00:15:06.267 14:20:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.267 14:20:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:06.267 14:20:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:06.835 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:07.403 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:07.403 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:07.403 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:07.662 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:07.662 00:15:07.662 real 0m59.981s 00:15:07.662 user 1m43.182s 00:15:07.662 sys 0m26.359s 00:15:07.662 14:20:27 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.662 14:20:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.662 ************************************ 00:15:07.662 END TEST blockdev_xnvme 00:15:07.662 ************************************ 00:15:07.662 14:20:27 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:07.662 14:20:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.662 14:20:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.662 14:20:27 -- common/autotest_common.sh@10 -- # set +x 00:15:07.662 ************************************ 00:15:07.662 START TEST ublk 00:15:07.662 ************************************ 00:15:07.662 14:20:27 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:07.921 * Looking for test storage... 
00:15:07.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:07.921 14:20:27 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:07.921 14:20:27 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:07.921 14:20:27 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:07.921 14:20:27 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:07.921 14:20:27 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:07.921 14:20:27 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:07.921 14:20:27 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:07.921 14:20:27 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:07.921 14:20:27 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:07.921 14:20:27 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:07.921 14:20:27 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:07.921 14:20:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:07.921 ************************************ 00:15:07.921 START TEST test_save_ublk_config 00:15:07.921 ************************************ 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75924 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75924 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 75924 ']' 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.921 14:20:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:07.921 [2024-07-26 14:20:27.580241] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:07.921 [2024-07-26 14:20:27.580412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75924 ] 00:15:08.181 [2024-07-26 14:20:27.754685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.440 [2024-07-26 14:20:28.058024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.008 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.008 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:09.008 14:20:28 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:09.008 14:20:28 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:09.008 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.008 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:09.008 [2024-07-26 14:20:28.717995] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:09.008 [2024-07-26 14:20:28.719109] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:09.267 malloc0 00:15:09.267 [2024-07-26 14:20:28.782062] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:09.267 [2024-07-26 14:20:28.782157] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:09.267 [2024-07-26 14:20:28.782184] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:09.267 [2024-07-26 14:20:28.782201] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:09.267 [2024-07-26 14:20:28.791047] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:09.267 [2024-07-26 14:20:28.791082] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:09.267 [2024-07-26 14:20:28.797967] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:09.267 [2024-07-26 14:20:28.798149] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:09.267 [2024-07-26 14:20:28.815023] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:09.267 0 00:15:09.267 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.267 14:20:28 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:09.267 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.267 14:20:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:09.267 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.267 14:20:29 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:09.267 "subsystems": [ 00:15:09.267 { 00:15:09.267 "subsystem": "keyring", 00:15:09.267 "config": [] 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "subsystem": "iobuf", 00:15:09.267 "config": [ 00:15:09.267 { 00:15:09.267 "method": "iobuf_set_options", 00:15:09.267 "params": { 00:15:09.267 "small_pool_count": 8192, 00:15:09.267 "large_pool_count": 1024, 00:15:09.267 "small_bufsize": 8192, 00:15:09.267 "large_bufsize": 135168 00:15:09.267 } 00:15:09.267 } 00:15:09.267 ] 00:15:09.267 }, 00:15:09.267 { 
00:15:09.267 "subsystem": "sock", 00:15:09.267 "config": [ 00:15:09.267 { 00:15:09.267 "method": "sock_set_default_impl", 00:15:09.267 "params": { 00:15:09.267 "impl_name": "posix" 00:15:09.267 } 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "method": "sock_impl_set_options", 00:15:09.267 "params": { 00:15:09.267 "impl_name": "ssl", 00:15:09.267 "recv_buf_size": 4096, 00:15:09.267 "send_buf_size": 4096, 00:15:09.267 "enable_recv_pipe": true, 00:15:09.267 "enable_quickack": false, 00:15:09.267 "enable_placement_id": 0, 00:15:09.267 "enable_zerocopy_send_server": true, 00:15:09.267 "enable_zerocopy_send_client": false, 00:15:09.267 "zerocopy_threshold": 0, 00:15:09.267 "tls_version": 0, 00:15:09.267 "enable_ktls": false 00:15:09.267 } 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "method": "sock_impl_set_options", 00:15:09.267 "params": { 00:15:09.267 "impl_name": "posix", 00:15:09.267 "recv_buf_size": 2097152, 00:15:09.267 "send_buf_size": 2097152, 00:15:09.267 "enable_recv_pipe": true, 00:15:09.267 "enable_quickack": false, 00:15:09.267 "enable_placement_id": 0, 00:15:09.267 "enable_zerocopy_send_server": true, 00:15:09.267 "enable_zerocopy_send_client": false, 00:15:09.267 "zerocopy_threshold": 0, 00:15:09.267 "tls_version": 0, 00:15:09.267 "enable_ktls": false 00:15:09.267 } 00:15:09.267 } 00:15:09.267 ] 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "subsystem": "vmd", 00:15:09.267 "config": [] 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "subsystem": "accel", 00:15:09.267 "config": [ 00:15:09.267 { 00:15:09.267 "method": "accel_set_options", 00:15:09.267 "params": { 00:15:09.267 "small_cache_size": 128, 00:15:09.267 "large_cache_size": 16, 00:15:09.267 "task_count": 2048, 00:15:09.267 "sequence_count": 2048, 00:15:09.267 "buf_count": 2048 00:15:09.267 } 00:15:09.267 } 00:15:09.267 ] 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "subsystem": "bdev", 00:15:09.267 "config": [ 00:15:09.267 { 00:15:09.267 "method": "bdev_set_options", 00:15:09.267 "params": { 00:15:09.267 "bdev_io_pool_size": 65535, 00:15:09.267 "bdev_io_cache_size": 256, 00:15:09.267 "bdev_auto_examine": true, 00:15:09.267 "iobuf_small_cache_size": 128, 00:15:09.267 "iobuf_large_cache_size": 16 00:15:09.267 } 00:15:09.267 }, 00:15:09.267 { 00:15:09.267 "method": "bdev_raid_set_options", 00:15:09.267 "params": { 00:15:09.268 "process_window_size_kb": 1024, 00:15:09.268 "process_max_bandwidth_mb_sec": 0 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "bdev_iscsi_set_options", 00:15:09.268 "params": { 00:15:09.268 "timeout_sec": 30 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "bdev_nvme_set_options", 00:15:09.268 "params": { 00:15:09.268 "action_on_timeout": "none", 00:15:09.268 "timeout_us": 0, 00:15:09.268 "timeout_admin_us": 0, 00:15:09.268 "keep_alive_timeout_ms": 10000, 00:15:09.268 "arbitration_burst": 0, 00:15:09.268 "low_priority_weight": 0, 00:15:09.268 "medium_priority_weight": 0, 00:15:09.268 "high_priority_weight": 0, 00:15:09.268 "nvme_adminq_poll_period_us": 10000, 00:15:09.268 "nvme_ioq_poll_period_us": 0, 00:15:09.268 "io_queue_requests": 0, 00:15:09.268 "delay_cmd_submit": true, 00:15:09.268 "transport_retry_count": 4, 00:15:09.268 "bdev_retry_count": 3, 00:15:09.268 "transport_ack_timeout": 0, 00:15:09.268 "ctrlr_loss_timeout_sec": 0, 00:15:09.268 "reconnect_delay_sec": 0, 00:15:09.268 "fast_io_fail_timeout_sec": 0, 00:15:09.268 "disable_auto_failback": false, 00:15:09.268 "generate_uuids": false, 00:15:09.268 "transport_tos": 0, 00:15:09.268 "nvme_error_stat": false, 
00:15:09.268 "rdma_srq_size": 0, 00:15:09.268 "io_path_stat": false, 00:15:09.268 "allow_accel_sequence": false, 00:15:09.268 "rdma_max_cq_size": 0, 00:15:09.268 "rdma_cm_event_timeout_ms": 0, 00:15:09.268 "dhchap_digests": [ 00:15:09.268 "sha256", 00:15:09.268 "sha384", 00:15:09.268 "sha512" 00:15:09.268 ], 00:15:09.268 "dhchap_dhgroups": [ 00:15:09.268 "null", 00:15:09.268 "ffdhe2048", 00:15:09.268 "ffdhe3072", 00:15:09.268 "ffdhe4096", 00:15:09.268 "ffdhe6144", 00:15:09.268 "ffdhe8192" 00:15:09.268 ] 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "bdev_nvme_set_hotplug", 00:15:09.268 "params": { 00:15:09.268 "period_us": 100000, 00:15:09.268 "enable": false 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "bdev_malloc_create", 00:15:09.268 "params": { 00:15:09.268 "name": "malloc0", 00:15:09.268 "num_blocks": 8192, 00:15:09.268 "block_size": 4096, 00:15:09.268 "physical_block_size": 4096, 00:15:09.268 "uuid": "fd06a45c-11e6-478a-af2e-93bb3ba3638c", 00:15:09.268 "optimal_io_boundary": 0, 00:15:09.268 "md_size": 0, 00:15:09.268 "dif_type": 0, 00:15:09.268 "dif_is_head_of_md": false, 00:15:09.268 "dif_pi_format": 0 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "bdev_wait_for_examine" 00:15:09.268 } 00:15:09.268 ] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "scsi", 00:15:09.268 "config": null 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "scheduler", 00:15:09.268 "config": [ 00:15:09.268 { 00:15:09.268 "method": "framework_set_scheduler", 00:15:09.268 "params": { 00:15:09.268 "name": "static" 00:15:09.268 } 00:15:09.268 } 00:15:09.268 ] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "vhost_scsi", 00:15:09.268 "config": [] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "vhost_blk", 00:15:09.268 "config": [] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "ublk", 00:15:09.268 "config": [ 00:15:09.268 { 00:15:09.268 "method": "ublk_create_target", 00:15:09.268 "params": { 00:15:09.268 "cpumask": "1" 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "ublk_start_disk", 00:15:09.268 "params": { 00:15:09.268 "bdev_name": "malloc0", 00:15:09.268 "ublk_id": 0, 00:15:09.268 "num_queues": 1, 00:15:09.268 "queue_depth": 128 00:15:09.268 } 00:15:09.268 } 00:15:09.268 ] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "nbd", 00:15:09.268 "config": [] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "nvmf", 00:15:09.268 "config": [ 00:15:09.268 { 00:15:09.268 "method": "nvmf_set_config", 00:15:09.268 "params": { 00:15:09.268 "discovery_filter": "match_any", 00:15:09.268 "admin_cmd_passthru": { 00:15:09.268 "identify_ctrlr": false 00:15:09.268 } 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "nvmf_set_max_subsystems", 00:15:09.268 "params": { 00:15:09.268 "max_subsystems": 1024 00:15:09.268 } 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "method": "nvmf_set_crdt", 00:15:09.268 "params": { 00:15:09.268 "crdt1": 0, 00:15:09.268 "crdt2": 0, 00:15:09.268 "crdt3": 0 00:15:09.268 } 00:15:09.268 } 00:15:09.268 ] 00:15:09.268 }, 00:15:09.268 { 00:15:09.268 "subsystem": "iscsi", 00:15:09.268 "config": [ 00:15:09.268 { 00:15:09.268 "method": "iscsi_set_options", 00:15:09.268 "params": { 00:15:09.268 "node_base": "iqn.2016-06.io.spdk", 00:15:09.268 "max_sessions": 128, 00:15:09.268 "max_connections_per_session": 2, 00:15:09.268 "max_queue_depth": 64, 00:15:09.268 "default_time2wait": 2, 00:15:09.268 "default_time2retain": 20, 00:15:09.268 
"first_burst_length": 8192, 00:15:09.268 "immediate_data": true, 00:15:09.268 "allow_duplicated_isid": false, 00:15:09.268 "error_recovery_level": 0, 00:15:09.268 "nop_timeout": 60, 00:15:09.268 "nop_in_interval": 30, 00:15:09.268 "disable_chap": false, 00:15:09.268 "require_chap": false, 00:15:09.268 "mutual_chap": false, 00:15:09.268 "chap_group": 0, 00:15:09.268 "max_large_datain_per_connection": 64, 00:15:09.268 "max_r2t_per_connection": 4, 00:15:09.268 "pdu_pool_size": 36864, 00:15:09.268 "immediate_data_pool_size": 16384, 00:15:09.268 "data_out_pool_size": 2048 00:15:09.268 } 00:15:09.268 } 00:15:09.268 ] 00:15:09.268 } 00:15:09.268 ] 00:15:09.268 }' 00:15:09.268 14:20:29 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75924 00:15:09.268 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 75924 ']' 00:15:09.268 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 75924 00:15:09.268 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:09.268 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:09.268 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75924 00:15:09.528 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:09.528 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:09.528 killing process with pid 75924 00:15:09.528 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75924' 00:15:09.528 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 75924 00:15:09.528 14:20:29 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 75924 00:15:10.904 [2024-07-26 14:20:30.228179] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:10.904 [2024-07-26 14:20:30.263999] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:10.904 [2024-07-26 14:20:30.264178] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:10.904 [2024-07-26 14:20:30.272032] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:10.904 [2024-07-26 14:20:30.272116] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:10.904 [2024-07-26 14:20:30.272129] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:10.904 [2024-07-26 14:20:30.272180] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:10.904 [2024-07-26 14:20:30.272397] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:11.840 14:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75973 00:15:11.840 14:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75973 00:15:11.840 14:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 75973 ']' 00:15:11.840 14:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.840 14:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.840 14:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:11.840 14:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:11.841 14:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.841 14:20:31 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:11.841 "subsystems": [ 00:15:11.841 { 00:15:11.841 "subsystem": "keyring", 00:15:11.841 "config": [] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "iobuf", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "iobuf_set_options", 00:15:11.841 "params": { 00:15:11.841 "small_pool_count": 8192, 00:15:11.841 "large_pool_count": 1024, 00:15:11.841 "small_bufsize": 8192, 00:15:11.841 "large_bufsize": 135168 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "sock", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "sock_set_default_impl", 00:15:11.841 "params": { 00:15:11.841 "impl_name": "posix" 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "sock_impl_set_options", 00:15:11.841 "params": { 00:15:11.841 "impl_name": "ssl", 00:15:11.841 "recv_buf_size": 4096, 00:15:11.841 "send_buf_size": 4096, 00:15:11.841 "enable_recv_pipe": true, 00:15:11.841 "enable_quickack": false, 00:15:11.841 "enable_placement_id": 0, 00:15:11.841 "enable_zerocopy_send_server": true, 00:15:11.841 "enable_zerocopy_send_client": false, 00:15:11.841 "zerocopy_threshold": 0, 00:15:11.841 "tls_version": 0, 00:15:11.841 "enable_ktls": false 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "sock_impl_set_options", 00:15:11.841 "params": { 00:15:11.841 "impl_name": "posix", 00:15:11.841 "recv_buf_size": 2097152, 00:15:11.841 "send_buf_size": 2097152, 00:15:11.841 "enable_recv_pipe": true, 00:15:11.841 "enable_quickack": false, 00:15:11.841 "enable_placement_id": 0, 00:15:11.841 "enable_zerocopy_send_server": true, 00:15:11.841 "enable_zerocopy_send_client": false, 00:15:11.841 "zerocopy_threshold": 0, 00:15:11.841 "tls_version": 0, 00:15:11.841 "enable_ktls": false 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "vmd", 00:15:11.841 "config": [] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "accel", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "accel_set_options", 00:15:11.841 "params": { 00:15:11.841 "small_cache_size": 128, 00:15:11.841 "large_cache_size": 16, 00:15:11.841 "task_count": 2048, 00:15:11.841 "sequence_count": 2048, 00:15:11.841 "buf_count": 2048 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "bdev", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "bdev_set_options", 00:15:11.841 "params": { 00:15:11.841 "bdev_io_pool_size": 65535, 00:15:11.841 "bdev_io_cache_size": 256, 00:15:11.841 "bdev_auto_examine": true, 00:15:11.841 "iobuf_small_cache_size": 128, 00:15:11.841 "iobuf_large_cache_size": 16 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "bdev_raid_set_options", 00:15:11.841 "params": { 00:15:11.841 "process_window_size_kb": 1024, 00:15:11.841 "process_max_bandwidth_mb_sec": 0 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "bdev_iscsi_set_options", 00:15:11.841 "params": { 00:15:11.841 "timeout_sec": 30 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "bdev_nvme_set_options", 00:15:11.841 "params": { 00:15:11.841 "action_on_timeout": "none", 
00:15:11.841 "timeout_us": 0, 00:15:11.841 "timeout_admin_us": 0, 00:15:11.841 "keep_alive_timeout_ms": 10000, 00:15:11.841 "arbitration_burst": 0, 00:15:11.841 "low_priority_weight": 0, 00:15:11.841 "medium_priority_weight": 0, 00:15:11.841 "high_priority_weight": 0, 00:15:11.841 "nvme_adminq_poll_period_us": 10000, 00:15:11.841 "nvme_ioq_poll_period_us": 0, 00:15:11.841 "io_queue_requests": 0, 00:15:11.841 "delay_cmd_submit": true, 00:15:11.841 "transport_retry_count": 4, 00:15:11.841 "bdev_retry_count": 3, 00:15:11.841 "transport_ack_timeout": 0, 00:15:11.841 "ctrlr_loss_timeout_sec": 0, 00:15:11.841 "reconnect_delay_sec": 0, 00:15:11.841 "fast_io_fail_timeout_sec": 0, 00:15:11.841 "disable_auto_failback": false, 00:15:11.841 "generate_uuids": false, 00:15:11.841 "transport_tos": 0, 00:15:11.841 "nvme_error_stat": false, 00:15:11.841 "rdma_srq_size": 0, 00:15:11.841 "io_path_stat": false, 00:15:11.841 "allow_accel_sequence": false, 00:15:11.841 "rdma_max_cq_size": 0, 00:15:11.841 "rdma_cm_event_timeout_ms": 0, 00:15:11.841 "dhchap_digests": [ 00:15:11.841 "sha256", 00:15:11.841 "sha384", 00:15:11.841 "sha512" 00:15:11.841 ], 00:15:11.841 "dhchap_dhgroups": [ 00:15:11.841 "null", 00:15:11.841 "ffdhe2048", 00:15:11.841 "ffdhe3072", 00:15:11.841 "ffdhe4096", 00:15:11.841 "ffdhe6144", 00:15:11.841 "ffdhe8192" 00:15:11.841 ] 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "bdev_nvme_set_hotplug", 00:15:11.841 "params": { 00:15:11.841 "period_us": 100000, 00:15:11.841 "enable": false 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "bdev_malloc_create", 00:15:11.841 "params": { 00:15:11.841 "name": "malloc0", 00:15:11.841 "num_blocks": 8192, 00:15:11.841 "block_size": 4096, 00:15:11.841 "physical_block_size": 4096, 00:15:11.841 "uuid": "fd06a45c-11e6-478a-af2e-93bb3ba3638c", 00:15:11.841 "optimal_io_boundary": 0, 00:15:11.841 "md_size": 0, 00:15:11.841 "dif_type": 0, 00:15:11.841 "dif_is_head_of_md": false, 00:15:11.841 "dif_pi_format": 0 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "bdev_wait_for_examine" 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "scsi", 00:15:11.841 "config": null 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "scheduler", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "framework_set_scheduler", 00:15:11.841 "params": { 00:15:11.841 "name": "static" 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "vhost_scsi", 00:15:11.841 "config": [] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "vhost_blk", 00:15:11.841 "config": [] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "ublk", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "ublk_create_target", 00:15:11.841 "params": { 00:15:11.841 "cpumask": "1" 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "ublk_start_disk", 00:15:11.841 "params": { 00:15:11.841 "bdev_name": "malloc0", 00:15:11.841 "ublk_id": 0, 00:15:11.841 "num_queues": 1, 00:15:11.841 "queue_depth": 128 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "nbd", 00:15:11.841 "config": [] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "nvmf", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "nvmf_set_config", 00:15:11.841 "params": { 00:15:11.841 "discovery_filter": "match_any", 00:15:11.841 "admin_cmd_passthru": { 00:15:11.841 "identify_ctrlr": false 
00:15:11.841 } 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "nvmf_set_max_subsystems", 00:15:11.841 "params": { 00:15:11.841 "max_subsystems": 1024 00:15:11.841 } 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "method": "nvmf_set_crdt", 00:15:11.841 "params": { 00:15:11.841 "crdt1": 0, 00:15:11.841 "crdt2": 0, 00:15:11.841 "crdt3": 0 00:15:11.841 } 00:15:11.841 } 00:15:11.841 ] 00:15:11.841 }, 00:15:11.841 { 00:15:11.841 "subsystem": "iscsi", 00:15:11.841 "config": [ 00:15:11.841 { 00:15:11.841 "method": "iscsi_set_options", 00:15:11.841 "params": { 00:15:11.841 "node_base": "iqn.2016-06.io.spdk", 00:15:11.841 "max_sessions": 128, 00:15:11.841 "max_connections_per_session": 2, 00:15:11.841 "max_queue_depth": 64, 00:15:11.841 "default_time2wait": 2, 00:15:11.841 "default_time2retain": 20, 00:15:11.841 "first_burst_length": 8192, 00:15:11.841 "immediate_data": true, 00:15:11.842 "allow_duplicated_isid": false, 00:15:11.842 "error_recovery_level": 0, 00:15:11.842 "nop_timeout": 60, 00:15:11.842 "nop_in_interval": 30, 00:15:11.842 "disable_chap": false, 00:15:11.842 "require_chap": false, 00:15:11.842 "mutual_chap": false, 00:15:11.842 "chap_group": 0, 00:15:11.842 "max_large_datain_per_connection": 64, 00:15:11.842 "max_r2t_per_connection": 4, 00:15:11.842 "pdu_pool_size": 36864, 00:15:11.842 "immediate_data_pool_size": 16384, 00:15:11.842 "data_out_pool_size": 2048 00:15:11.842 } 00:15:11.842 } 00:15:11.842 ] 00:15:11.842 } 00:15:11.842 ] 00:15:11.842 }' 00:15:11.842 14:20:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:11.842 [2024-07-26 14:20:31.482806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:11.842 [2024-07-26 14:20:31.483021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75973 ] 00:15:12.100 [2024-07-26 14:20:31.656009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.100 [2024-07-26 14:20:31.840466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.034 [2024-07-26 14:20:32.616917] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:13.034 [2024-07-26 14:20:32.618015] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:13.034 [2024-07-26 14:20:32.623218] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:13.034 [2024-07-26 14:20:32.623321] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:13.034 [2024-07-26 14:20:32.623337] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:13.034 [2024-07-26 14:20:32.623347] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:13.034 [2024-07-26 14:20:32.631074] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:13.034 [2024-07-26 14:20:32.631101] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:13.034 [2024-07-26 14:20:32.641058] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:13.034 [2024-07-26 14:20:32.641206] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:13.034 [2024-07-26 14:20:32.665016] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75973 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 75973 ']' 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 75973 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75973 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.034 killing process with pid 75973 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75973' 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 75973 00:15:13.034 14:20:32 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 75973 00:15:14.415 [2024-07-26 14:20:34.026051] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:14.415 [2024-07-26 14:20:34.066040] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:14.415 [2024-07-26 14:20:34.066281] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:14.415 [2024-07-26 14:20:34.075008] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:14.415 [2024-07-26 14:20:34.075079] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:14.415 [2024-07-26 14:20:34.075093] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:14.416 [2024-07-26 14:20:34.075127] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:14.416 [2024-07-26 14:20:34.075344] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:15.788 14:20:35 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:15.788 00:15:15.788 real 0m7.723s 00:15:15.788 user 0m6.673s 00:15:15.788 sys 0m1.916s 00:15:15.788 14:20:35 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.788 14:20:35 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:15.788 ************************************ 00:15:15.788 END TEST test_save_ublk_config 00:15:15.788 ************************************ 00:15:15.788 14:20:35 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=76047 00:15:15.788 14:20:35 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:15.788 14:20:35 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:15.788 14:20:35 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76047 00:15:15.788 14:20:35 ublk -- common/autotest_common.sh@831 -- # '[' -z 76047 ']' 00:15:15.788 14:20:35 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.788 14:20:35 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.788 14:20:35 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.788 14:20:35 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.788 14:20:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:15.788 [2024-07-26 14:20:35.346435] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:15.788 [2024-07-26 14:20:35.346608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76047 ] 00:15:15.788 [2024-07-26 14:20:35.518944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:16.047 [2024-07-26 14:20:35.674495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.047 [2024-07-26 14:20:35.674507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.613 14:20:36 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.613 14:20:36 ublk -- common/autotest_common.sh@864 -- # return 0 00:15:16.613 14:20:36 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:16.613 14:20:36 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:16.613 14:20:36 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.614 14:20:36 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.614 ************************************ 00:15:16.614 START TEST test_create_ublk 00:15:16.614 ************************************ 00:15:16.614 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:15:16.614 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:16.614 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.614 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.614 [2024-07-26 14:20:36.332994] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:16.614 [2024-07-26 14:20:36.335533] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:16.614 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.614 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:16.614 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:16.614 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.614 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.872 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.872 
14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:16.872 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:16.872 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.872 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:16.872 [2024-07-26 14:20:36.580194] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:16.872 [2024-07-26 14:20:36.580739] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:16.872 [2024-07-26 14:20:36.580761] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:16.872 [2024-07-26 14:20:36.580775] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:16.872 [2024-07-26 14:20:36.588015] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:16.872 [2024-07-26 14:20:36.588046] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:16.872 [2024-07-26 14:20:36.598941] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:16.872 [2024-07-26 14:20:36.611148] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:16.872 [2024-07-26 14:20:36.633057] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.130 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:17.130 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.130 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:17.130 14:20:36 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:17.130 { 00:15:17.130 "ublk_device": "/dev/ublkb0", 00:15:17.130 "id": 0, 00:15:17.130 "queue_depth": 512, 00:15:17.130 "num_queues": 4, 00:15:17.130 "bdev_name": "Malloc0" 00:15:17.130 } 00:15:17.130 ]' 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:17.130 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:17.388 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:17.388 14:20:36 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:17.388 14:20:36 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:17.388 fio: verification read phase will never start because write phase uses all of runtime 00:15:17.388 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:17.388 fio-3.35 00:15:17.388 Starting 1 process 00:15:29.588 00:15:29.588 fio_test: (groupid=0, jobs=1): err= 0: pid=76097: Fri Jul 26 14:20:47 2024 00:15:29.588 write: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec); 0 zone resets 00:15:29.588 clat (usec): min=49, max=7978, avg=81.07, stdev=156.38 00:15:29.588 lat (usec): min=50, max=7983, avg=81.74, stdev=156.41 00:15:29.588 clat percentiles (usec): 00:15:29.588 | 1.00th=[ 62], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 66], 00:15:29.588 | 30.00th=[ 67], 40.00th=[ 68], 50.00th=[ 69], 60.00th=[ 70], 00:15:29.588 | 70.00th=[ 73], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 98], 00:15:29.588 | 99.00th=[ 122], 99.50th=[ 143], 99.90th=[ 3097], 99.95th=[ 3556], 00:15:29.588 | 99.99th=[ 4015] 00:15:29.588 bw ( KiB/s): min=18120, max=52616, per=100.00%, avg=48613.37, stdev=7494.95, samples=19 00:15:29.588 iops : min= 4530, max=13154, avg=12153.32, stdev=1873.73, samples=19 00:15:29.588 lat (usec) : 50=0.01%, 100=95.66%, 250=3.91%, 500=0.03%, 750=0.02% 00:15:29.588 lat (usec) : 1000=0.03% 00:15:29.588 lat (msec) : 2=0.12%, 4=0.21%, 10=0.01% 00:15:29.588 cpu : usr=2.38%, sys=7.14%, ctx=121452, majf=0, minf=796 00:15:29.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:29.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.588 issued rwts: total=0,121448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:29.588 00:15:29.588 Run status group 0 (all jobs): 00:15:29.588 WRITE: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:15:29.588 00:15:29.588 Disk stats (read/write): 00:15:29.588 ublkb0: ios=0/120219, merge=0/0, ticks=0/8975, in_queue=8976, util=98.97% 00:15:29.588 14:20:47 ublk.test_create_ublk -- 
ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.588 [2024-07-26 14:20:47.161835] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:29.588 [2024-07-26 14:20:47.202388] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:29.588 [2024-07-26 14:20:47.203908] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:29.588 [2024-07-26 14:20:47.210008] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:29.588 [2024-07-26 14:20:47.210385] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:29.588 [2024-07-26 14:20:47.210406] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.588 14:20:47 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.588 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 [2024-07-26 14:20:47.234074] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:29.589 request: 00:15:29.589 { 00:15:29.589 "ublk_id": 0, 00:15:29.589 "method": "ublk_stop_disk", 00:15:29.589 "req_id": 1 00:15:29.589 } 00:15:29.589 Got JSON-RPC error response 00:15:29.589 response: 00:15:29.589 { 00:15:29.589 "code": -19, 00:15:29.589 "message": "No such device" 00:15:29.589 } 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:29.589 14:20:47 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 [2024-07-26 14:20:47.241043] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:29.589 [2024-07-26 14:20:47.248974] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:29.589 [2024-07-26 14:20:47.249042] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:29.589 14:20:47 
ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:29.589 ************************************ 00:15:29.589 END TEST test_create_ublk 00:15:29.589 ************************************ 00:15:29.589 14:20:47 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:29.589 00:15:29.589 real 0m11.313s 00:15:29.589 user 0m0.665s 00:15:29.589 sys 0m0.800s 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:47 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:29.589 14:20:47 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:29.589 14:20:47 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:29.589 14:20:47 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 ************************************ 00:15:29.589 START TEST test_create_multi_ublk 00:15:29.589 ************************************ 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 [2024-07-26 14:20:47.696999] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:29.589 [2024-07-26 14:20:47.699226] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:29.589 
14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 [2024-07-26 14:20:47.917218] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:29.589 [2024-07-26 14:20:47.917734] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:29.589 [2024-07-26 14:20:47.917758] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:29.589 [2024-07-26 14:20:47.917768] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:29.589 [2024-07-26 14:20:47.926292] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:29.589 [2024-07-26 14:20:47.926321] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:29.589 [2024-07-26 14:20:47.932030] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:29.589 [2024-07-26 14:20:47.932790] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:29.589 [2024-07-26 14:20:47.948069] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 [2024-07-26 14:20:48.198149] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:29.589 [2024-07-26 14:20:48.198700] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:29.589 [2024-07-26 14:20:48.198735] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:29.589 [2024-07-26 14:20:48.198762] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:29.589 [2024-07-26 14:20:48.204959] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:29.589 [2024-07-26 14:20:48.204991] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:29.589 [2024-07-26 14:20:48.212028] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:29.589 [2024-07-26 14:20:48.212809] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:29.589 [2024-07-26 14:20:48.235944] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.589 [2024-07-26 14:20:48.470190] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:29.589 [2024-07-26 14:20:48.470724] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:29.589 [2024-07-26 14:20:48.470750] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:29.589 [2024-07-26 14:20:48.470760] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:29.589 [2024-07-26 14:20:48.477959] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:29.589 [2024-07-26 14:20:48.477990] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:29.589 [2024-07-26 14:20:48.484035] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:29.589 [2024-07-26 14:20:48.484839] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:29.589 [2024-07-26 14:20:48.492479] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.589 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.590 [2024-07-26 14:20:48.736171] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:29.590 [2024-07-26 14:20:48.736690] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:29.590 [2024-07-26 14:20:48.736711] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:29.590 [2024-07-26 14:20:48.736723] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:29.590 [2024-07-26 14:20:48.745215] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:29.590 [2024-07-26 14:20:48.745250] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:29.590 [2024-07-26 14:20:48.754047] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:29.590 [2024-07-26 14:20:48.754798] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:29.590 [2024-07-26 14:20:48.771021] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:29.590 { 00:15:29.590 "ublk_device": "/dev/ublkb0", 00:15:29.590 "id": 0, 00:15:29.590 "queue_depth": 512, 00:15:29.590 "num_queues": 4, 00:15:29.590 "bdev_name": "Malloc0" 00:15:29.590 }, 00:15:29.590 { 00:15:29.590 "ublk_device": "/dev/ublkb1", 00:15:29.590 "id": 1, 00:15:29.590 "queue_depth": 512, 00:15:29.590 "num_queues": 4, 00:15:29.590 "bdev_name": "Malloc1" 00:15:29.590 }, 00:15:29.590 { 00:15:29.590 "ublk_device": "/dev/ublkb2", 00:15:29.590 "id": 2, 00:15:29.590 "queue_depth": 512, 00:15:29.590 "num_queues": 4, 00:15:29.590 "bdev_name": "Malloc2" 00:15:29.590 }, 00:15:29.590 { 00:15:29.590 "ublk_device": "/dev/ublkb3", 00:15:29.590 "id": 3, 00:15:29.590 "queue_depth": 512, 00:15:29.590 "num_queues": 4, 00:15:29.590 "bdev_name": "Malloc3" 00:15:29.590 } 00:15:29.590 ]' 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:29.590 14:20:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:29.590 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:29.847 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
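The ublk_get_disks output above lists four devices, /dev/ublkb0 through /dev/ublkb3, each backed by its own malloc bdev, and the jq checks verify id, queue depth, queue count and bdev name per device. A minimal sketch of reproducing that layout with scripts/rpc.py against a running spdk_tgt (sizes and queue settings mirror the RPC calls recorded in this log):

  ./scripts/rpc.py ublk_create_target
  for i in 0 1 2 3; do
    ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096      # 128 MiB bdev, 4 KiB blocks
    ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512    # exposes /dev/ublkb$i
  done
  ./scripts/rpc.py ublk_get_disks | jq -r '.[].ublk_device'         # expect /dev/ublkb0 .. /dev/ublkb3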
00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:30.106 [2024-07-26 14:20:49.795160] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:30.106 [2024-07-26 14:20:49.837973] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:30.106 [2024-07-26 14:20:49.843268] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:30.106 [2024-07-26 14:20:49.851998] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:30.106 [2024-07-26 14:20:49.852382] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:30.106 [2024-07-26 14:20:49.852416] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.106 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:30.106 [2024-07-26 14:20:49.861151] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:30.364 [2024-07-26 14:20:49.893419] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:30.364 [2024-07-26 14:20:49.898211] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:30.364 [2024-07-26 14:20:49.906038] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:30.364 [2024-07-26 14:20:49.906394] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:30.364 [2024-07-26 14:20:49.906407] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:30.364 [2024-07-26 14:20:49.915066] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:30.364 [2024-07-26 14:20:49.952011] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:30.364 [2024-07-26 14:20:49.957248] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:30.364 [2024-07-26 14:20:49.966104] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:30.364 [2024-07-26 14:20:49.966531] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:30.364 [2024-07-26 14:20:49.966547] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.364 14:20:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:30.364 [2024-07-26 14:20:49.975140] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:30.364 [2024-07-26 14:20:50.005979] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:30.364 [2024-07-26 14:20:50.010340] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:30.364 [2024-07-26 14:20:50.018038] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:30.364 [2024-07-26 14:20:50.018368] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:30.364 [2024-07-26 14:20:50.018382] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:30.364 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.365 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:30.623 [2024-07-26 14:20:50.288128] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:30.623 [2024-07-26 14:20:50.295040] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:30.623 [2024-07-26 14:20:50.295137] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:30.623 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:30.623 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:30.623 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:30.623 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.623 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:30.881 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.881 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:30.881 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:30.881 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.881 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.139 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.139 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:15:31.139 14:20:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:31.139 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.139 14:20:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.405 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.405 14:20:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:31.405 14:20:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:31.405 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.405 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:31.685 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:31.943 ************************************ 00:15:31.943 END TEST test_create_multi_ublk 00:15:31.943 ************************************ 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:31.943 00:15:31.943 real 0m3.867s 00:15:31.943 user 0m1.268s 00:15:31.943 sys 0m0.178s 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.943 14:20:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:31.943 14:20:51 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:31.943 14:20:51 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:31.943 14:20:51 ublk -- ublk/ublk.sh@130 -- # killprocess 76047 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@950 -- # '[' -z 76047 ']' 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@954 -- # kill -0 76047 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@955 -- # uname 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76047 00:15:31.943 killing process with pid 76047 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@956 -- 
# process_name=reactor_0 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76047' 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@969 -- # kill 76047 00:15:31.943 14:20:51 ublk -- common/autotest_common.sh@974 -- # wait 76047 00:15:32.877 [2024-07-26 14:20:52.479140] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:15:32.877 [2024-07-26 14:20:52.479237] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:15:33.811 00:15:33.811 real 0m26.141s 00:15:33.811 user 0m39.967s 00:15:33.811 sys 0m7.708s 00:15:33.811 14:20:53 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:33.811 ************************************ 00:15:33.811 END TEST ublk 00:15:33.811 ************************************ 00:15:33.811 14:20:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:33.811 14:20:53 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:33.811 14:20:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:33.811 14:20:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.811 14:20:53 -- common/autotest_common.sh@10 -- # set +x 00:15:33.811 ************************************ 00:15:33.811 START TEST ublk_recovery 00:15:33.811 ************************************ 00:15:33.811 14:20:53 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:34.070 * Looking for test storage... 00:15:34.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:34.070 14:20:53 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:34.070 14:20:53 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:34.070 14:20:53 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:34.070 14:20:53 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76425 00:15:34.070 14:20:53 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:34.070 14:20:53 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76425 00:15:34.070 14:20:53 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:34.070 14:20:53 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 76425 ']' 00:15:34.070 14:20:53 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.070 14:20:53 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.070 14:20:53 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
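The recovery scenario is being set up here: the target launched just above (pid 76425) exports a 64 MiB malloc bdev as /dev/ublkb1, and fio is then pointed at that device before the process is killed. A minimal sketch of that setup with scripts/rpc.py (flags mirror the calls recorded above and below):

  ./build/bin/spdk_tgt -m 0x3 -L ublk &                       # first target; this pid is what gets SIGKILLed later
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096      # 64 MiB backing bdev
  ./scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # exposes /dev/ublkb1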
00:15:34.070 14:20:53 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.070 14:20:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:34.070 [2024-07-26 14:20:53.758781] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:34.070 [2024-07-26 14:20:53.758990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76425 ] 00:15:34.328 [2024-07-26 14:20:53.932264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:34.586 [2024-07-26 14:20:54.110142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.586 [2024-07-26 14:20:54.110150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:35.152 14:20:54 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.152 [2024-07-26 14:20:54.753979] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:35.152 [2024-07-26 14:20:54.756404] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.152 14:20:54 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.152 malloc0 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.152 14:20:54 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.152 [2024-07-26 14:20:54.870133] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:35.152 [2024-07-26 14:20:54.870287] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:35.152 [2024-07-26 14:20:54.870301] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:35.152 [2024-07-26 14:20:54.870312] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:35.152 [2024-07-26 14:20:54.876945] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:35.152 [2024-07-26 14:20:54.876996] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:35.152 [2024-07-26 14:20:54.883958] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:35.152 [2024-07-26 14:20:54.884141] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:35.152 [2024-07-26 14:20:54.903079] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:35.152 1 00:15:35.152 14:20:54 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:35.152 14:20:54 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:36.528 14:20:55 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76460 00:15:36.528 14:20:55 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:36.528 14:20:55 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:36.528 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:36.528 fio-3.35 00:15:36.528 Starting 1 process 00:15:41.804 14:21:00 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76425 00:15:41.804 14:21:00 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:47.075 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76425 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:47.075 14:21:05 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76572 00:15:47.075 14:21:05 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:47.075 14:21:05 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:47.075 14:21:05 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76572 00:15:47.075 14:21:05 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 76572 ']' 00:15:47.075 14:21:05 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.075 14:21:05 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.075 14:21:05 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.075 14:21:05 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.075 14:21:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.075 [2024-07-26 14:21:06.044427] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:47.075 [2024-07-26 14:21:06.044844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76572 ] 00:15:47.075 [2024-07-26 14:21:06.219533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.075 [2024-07-26 14:21:06.438405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.075 [2024-07-26 14:21:06.438419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:47.642 14:21:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.642 [2024-07-26 14:21:07.123047] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:47.642 [2024-07-26 14:21:07.125678] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.642 14:21:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.642 malloc0 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.642 14:21:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:47.642 [2024-07-26 14:21:07.255183] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:47.642 [2024-07-26 14:21:07.255269] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:47.642 [2024-07-26 14:21:07.255295] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:47.642 [2024-07-26 14:21:07.263059] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:47.642 [2024-07-26 14:21:07.263103] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:47.642 [2024-07-26 14:21:07.263214] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:47.642 1 00:15:47.642 14:21:07 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.642 14:21:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76460 00:16:14.322 [2024-07-26 14:21:31.333975] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:14.322 [2024-07-26 14:21:31.341129] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:14.322 [2024-07-26 14:21:31.348253] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:14.322 [2024-07-26 14:21:31.348305] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:40.859 00:16:40.859 
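The trace above exercises the full crash-recovery path: fio runs against /dev/ublkb1, the original target (pid 76425) is killed with SIGKILL mid-run, a fresh spdk_tgt comes up, and the live device is re-adopted with ublk_recover_disk instead of ublk_start_disk, after which fio finishes cleanly. A condensed sketch of that sequence (assuming the default RPC socket, the repository paths used in this run, and pgrep to locate the old target) is:

    SPDK=/home/vagrant/spdk_repo/spdk
    rpc=$SPDK/scripts/rpc.py

    # An spdk_tgt (-m 0x3 -L ublk) is assumed to already be serving /dev/ublkb1, as set up earlier in the trace.
    tgt_pid=$(pgrep -f 'spdk_tgt -m 0x3')

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
            --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    fio_pid=$!

    kill -9 "$tgt_pid"                            # simulate a target crash mid-I/O
    "$SPDK/build/bin/spdk_tgt" -m 0x3 -L ublk &   # bring up a fresh target on the same cores
    sleep 5                                       # crude stand-in for the script's waitforlisten

    "$rpc" ublk_create_target                     # recreate the ublk target...
    "$rpc" bdev_malloc_create -b malloc0 64 4096  # ...and the backing bdev under its old name
    "$rpc" ublk_recover_disk malloc0 1            # re-adopt the live /dev/ublkb1 (vs. ublk_start_disk)
    wait "$fio_pid"                               # fio completes against the recovered device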
fio_test: (groupid=0, jobs=1): err= 0: pid=76469: Fri Jul 26 14:21:56 2024 00:16:40.859 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(2430MiB/60002msec) 00:16:40.859 slat (nsec): min=1777, max=172905, avg=6183.91, stdev=3033.15 00:16:40.859 clat (usec): min=1166, max=30439k, avg=6411.23, stdev=322862.49 00:16:40.859 lat (usec): min=1175, max=30439k, avg=6417.41, stdev=322862.50 00:16:40.859 clat percentiles (msec): 00:16:40.859 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:16:40.859 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:16:40.859 | 70.00th=[ 3], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:16:40.859 | 99.00th=[ 7], 99.50th=[ 7], 99.90th=[ 9], 99.95th=[ 11], 00:16:40.859 | 99.99th=[17113] 00:16:40.859 bw ( KiB/s): min=24560, max=92888, per=100.00%, avg=83018.58, stdev=11013.16, samples=59 00:16:40.859 iops : min= 6140, max=23222, avg=20754.64, stdev=2753.29, samples=59 00:16:40.859 write: IOPS=10.4k, BW=40.5MiB/s (42.4MB/s)(2427MiB/60002msec); 0 zone resets 00:16:40.859 slat (nsec): min=1804, max=186064, avg=6291.03, stdev=3107.00 00:16:40.859 clat (usec): min=1093, max=30439k, avg=5929.01, stdev=294039.24 00:16:40.859 lat (usec): min=1111, max=30439k, avg=5935.30, stdev=294039.24 00:16:40.859 clat percentiles (usec): 00:16:40.859 | 1.00th=[ 2474], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2802], 00:16:40.859 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:16:40.859 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3949], 00:16:40.859 | 99.00th=[ 6128], 99.50th=[ 6718], 99.90th=[ 8717], 99.95th=[10814], 00:16:40.859 | 99.99th=[13698] 00:16:40.859 bw ( KiB/s): min=24368, max=90456, per=100.00%, avg=82939.53, stdev=10950.63, samples=59 00:16:40.859 iops : min= 6092, max=22614, avg=20734.88, stdev=2737.66, samples=59 00:16:40.859 lat (msec) : 2=0.09%, 4=94.87%, 10=4.98%, 20=0.05%, >=2000=0.01% 00:16:40.859 cpu : usr=5.48%, sys=12.03%, ctx=37624, majf=0, minf=13 00:16:40.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:40.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:40.859 issued rwts: total=621961,621338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:40.859 00:16:40.859 Run status group 0 (all jobs): 00:16:40.859 READ: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=2430MiB (2548MB), run=60002-60002msec 00:16:40.859 WRITE: bw=40.5MiB/s (42.4MB/s), 40.5MiB/s-40.5MiB/s (42.4MB/s-42.4MB/s), io=2427MiB (2545MB), run=60002-60002msec 00:16:40.859 00:16:40.859 Disk stats (read/write): 00:16:40.859 ublkb1: ios=619645/618969, merge=0/0, ticks=3925440/3554207, in_queue=7479647, util=99.94% 00:16:40.859 14:21:56 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.859 [2024-07-26 14:21:56.156671] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:40.859 [2024-07-26 14:21:56.209012] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:40.859 [2024-07-26 14:21:56.209291] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:40.859 [2024-07-26 14:21:56.216981] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: 
ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:40.859 [2024-07-26 14:21:56.217125] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:40.859 [2024-07-26 14:21:56.217152] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.859 14:21:56 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.859 [2024-07-26 14:21:56.233102] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.859 [2024-07-26 14:21:56.242334] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:40.859 [2024-07-26 14:21:56.242391] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.859 14:21:56 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:40.859 14:21:56 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:40.859 14:21:56 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76572 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 76572 ']' 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 76572 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76572 00:16:40.859 killing process with pid 76572 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76572' 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@969 -- # kill 76572 00:16:40.859 14:21:56 ublk_recovery -- common/autotest_common.sh@974 -- # wait 76572 00:16:40.859 [2024-07-26 14:21:57.121572] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.859 [2024-07-26 14:21:57.121648] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:16:40.859 ************************************ 00:16:40.859 END TEST ublk_recovery 00:16:40.859 ************************************ 00:16:40.859 00:16:40.859 real 1m4.723s 00:16:40.859 user 1m50.582s 00:16:40.859 sys 0m18.399s 00:16:40.859 14:21:58 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:40.859 14:21:58 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:40.859 14:21:58 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@264 -- # timing_exit lib 00:16:40.859 14:21:58 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:40.859 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:16:40.859 14:21:58 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 
00:16:40.859 14:21:58 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:16:40.859 14:21:58 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:40.859 14:21:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:40.859 14:21:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:40.859 14:21:58 -- common/autotest_common.sh@10 -- # set +x 00:16:40.859 ************************************ 00:16:40.859 START TEST ftl 00:16:40.859 ************************************ 00:16:40.859 14:21:58 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:40.860 * Looking for test storage... 00:16:40.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:40.860 14:21:58 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:40.860 14:21:58 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.860 14:21:58 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:40.860 14:21:58 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:40.860 14:21:58 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:40.860 14:21:58 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.860 14:21:58 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:40.860 14:21:58 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:40.860 14:21:58 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.860 14:21:58 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.860 14:21:58 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:40.860 14:21:58 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:40.860 14:21:58 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.860 14:21:58 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:40.860 14:21:58 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:40.860 14:21:58 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:40.860 14:21:58 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.860 14:21:58 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:40.860 14:21:58 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:40.860 14:21:58 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:40.860 14:21:58 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.860 14:21:58 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:40.860 14:21:58 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.860 14:21:58 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:40.860 14:21:58 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:40.860 14:21:58 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:40.860 14:21:58 ftl -- ftl/common.sh@25 -- # export 
spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.860 14:21:58 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:40.860 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:40.860 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.860 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.860 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.860 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77360 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:40.860 14:21:58 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77360 00:16:40.860 14:21:58 ftl -- common/autotest_common.sh@831 -- # '[' -z 77360 ']' 00:16:40.860 14:21:58 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.860 14:21:58 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:40.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.860 14:21:58 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.860 14:21:58 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:40.860 14:21:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:40.860 [2024-07-26 14:21:59.071165] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:40.860 [2024-07-26 14:21:59.071442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77360 ] 00:16:40.860 [2024-07-26 14:21:59.237162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.860 [2024-07-26 14:21:59.396131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.860 14:21:59 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:40.860 14:21:59 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:40.860 14:21:59 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:40.860 14:22:00 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:41.426 14:22:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:41.426 14:22:01 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:41.992 14:22:01 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:41.992 14:22:01 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:41.992 14:22:01 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@50 -- # break 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@63 -- # break 00:16:42.251 14:22:01 ftl -- ftl/ftl.sh@66 -- # killprocess 77360 00:16:42.251 14:22:01 ftl -- common/autotest_common.sh@950 -- # '[' -z 77360 ']' 00:16:42.251 14:22:01 ftl -- common/autotest_common.sh@954 -- # kill -0 77360 00:16:42.251 14:22:01 ftl -- common/autotest_common.sh@955 -- # uname 00:16:42.251 14:22:01 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:42.251 14:22:01 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77360 00:16:42.251 14:22:02 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:42.251 14:22:02 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:42.251 killing process with pid 77360 00:16:42.251 14:22:02 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77360' 00:16:42.251 14:22:02 ftl -- common/autotest_common.sh@969 -- # kill 77360 00:16:42.251 14:22:02 ftl -- common/autotest_common.sh@974 -- # wait 77360 00:16:44.157 14:22:03 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:44.157 14:22:03 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:44.157 14:22:03 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:44.157 14:22:03 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.157 14:22:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:44.157 ************************************ 00:16:44.157 START TEST ftl_fio_basic 00:16:44.157 ************************************ 00:16:44.157 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:44.416 * Looking for test storage... 00:16:44.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77491 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77491 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 77491 ']' 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:44.416 14:22:03 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:44.416 [2024-07-26 14:22:04.049493] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
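The suite table that fio.sh declares above is a plain associative-array lookup; in this run the suite name is the third positional argument ('basic') after the base and cache PCI addresses. A standalone sketch of that selection, using only the entries visible in the trace and a simplified argument (the suite name alone), is:

    #!/usr/bin/env bash
    # Map a suite name to the fio job list fio.sh will run.
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    tests=${suite[${1:-basic}]}
    if [ -z "$tests" ]; then
        echo "unknown suite: ${1:-basic}" >&2
        exit 1
    fi
    for job in $tests; do
        echo "would run fio job: $job"    # fio.sh iterates the selected job names here
    done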
00:16:44.416 [2024-07-26 14:22:04.050221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77491 ] 00:16:44.675 [2024-07-26 14:22:04.206621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.675 [2024-07-26 14:22:04.384610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.675 [2024-07-26 14:22:04.384726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.675 [2024-07-26 14:22:04.384749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:45.613 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:45.872 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:45.872 { 00:16:45.872 "name": "nvme0n1", 00:16:45.872 "aliases": [ 00:16:45.872 "b2494506-8f8c-4851-8d96-0703a1657697" 00:16:45.872 ], 00:16:45.872 "product_name": "NVMe disk", 00:16:45.872 "block_size": 4096, 00:16:45.872 "num_blocks": 1310720, 00:16:45.872 "uuid": "b2494506-8f8c-4851-8d96-0703a1657697", 00:16:45.872 "assigned_rate_limits": { 00:16:45.872 "rw_ios_per_sec": 0, 00:16:45.872 "rw_mbytes_per_sec": 0, 00:16:45.872 "r_mbytes_per_sec": 0, 00:16:45.872 "w_mbytes_per_sec": 0 00:16:45.872 }, 00:16:45.872 "claimed": false, 00:16:45.872 "zoned": false, 00:16:45.872 "supported_io_types": { 00:16:45.872 "read": true, 00:16:45.872 "write": true, 00:16:45.872 "unmap": true, 00:16:45.872 "flush": true, 00:16:45.872 "reset": true, 00:16:45.872 "nvme_admin": true, 00:16:45.872 "nvme_io": true, 00:16:45.872 "nvme_io_md": false, 00:16:45.872 "write_zeroes": true, 00:16:45.872 "zcopy": false, 00:16:45.872 "get_zone_info": false, 00:16:45.872 "zone_management": false, 00:16:45.872 "zone_append": false, 00:16:45.872 "compare": true, 00:16:45.872 "compare_and_write": false, 00:16:45.872 "abort": true, 00:16:45.872 "seek_hole": false, 00:16:45.872 
"seek_data": false, 00:16:45.872 "copy": true, 00:16:45.872 "nvme_iov_md": false 00:16:45.872 }, 00:16:45.872 "driver_specific": { 00:16:45.872 "nvme": [ 00:16:45.872 { 00:16:45.872 "pci_address": "0000:00:11.0", 00:16:45.872 "trid": { 00:16:45.872 "trtype": "PCIe", 00:16:45.872 "traddr": "0000:00:11.0" 00:16:45.872 }, 00:16:45.872 "ctrlr_data": { 00:16:45.872 "cntlid": 0, 00:16:45.872 "vendor_id": "0x1b36", 00:16:45.872 "model_number": "QEMU NVMe Ctrl", 00:16:45.872 "serial_number": "12341", 00:16:45.872 "firmware_revision": "8.0.0", 00:16:45.872 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:45.872 "oacs": { 00:16:45.872 "security": 0, 00:16:45.872 "format": 1, 00:16:45.872 "firmware": 0, 00:16:45.872 "ns_manage": 1 00:16:45.872 }, 00:16:45.872 "multi_ctrlr": false, 00:16:45.872 "ana_reporting": false 00:16:45.872 }, 00:16:45.872 "vs": { 00:16:45.872 "nvme_version": "1.4" 00:16:45.872 }, 00:16:45.872 "ns_data": { 00:16:45.872 "id": 1, 00:16:45.872 "can_share": false 00:16:45.872 } 00:16:45.872 } 00:16:45.872 ], 00:16:45.872 "mp_policy": "active_passive" 00:16:45.872 } 00:16:45.872 } 00:16:45.872 ]' 00:16:45.872 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:46.131 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:46.419 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:46.419 14:22:05 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:46.687 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5d73a282-9883-4cdd-80dd-e9e999e5ff67 00:16:46.687 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5d73a282-9883-4cdd-80dd-e9e999e5ff67 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:46.946 14:22:06 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:46.946 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:47.205 { 00:16:47.205 "name": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:47.205 "aliases": [ 00:16:47.205 "lvs/nvme0n1p0" 00:16:47.205 ], 00:16:47.205 "product_name": "Logical Volume", 00:16:47.205 "block_size": 4096, 00:16:47.205 "num_blocks": 26476544, 00:16:47.205 "uuid": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:47.205 "assigned_rate_limits": { 00:16:47.205 "rw_ios_per_sec": 0, 00:16:47.205 "rw_mbytes_per_sec": 0, 00:16:47.205 "r_mbytes_per_sec": 0, 00:16:47.205 "w_mbytes_per_sec": 0 00:16:47.205 }, 00:16:47.205 "claimed": false, 00:16:47.205 "zoned": false, 00:16:47.205 "supported_io_types": { 00:16:47.205 "read": true, 00:16:47.205 "write": true, 00:16:47.205 "unmap": true, 00:16:47.205 "flush": false, 00:16:47.205 "reset": true, 00:16:47.205 "nvme_admin": false, 00:16:47.205 "nvme_io": false, 00:16:47.205 "nvme_io_md": false, 00:16:47.205 "write_zeroes": true, 00:16:47.205 "zcopy": false, 00:16:47.205 "get_zone_info": false, 00:16:47.205 "zone_management": false, 00:16:47.205 "zone_append": false, 00:16:47.205 "compare": false, 00:16:47.205 "compare_and_write": false, 00:16:47.205 "abort": false, 00:16:47.205 "seek_hole": true, 00:16:47.205 "seek_data": true, 00:16:47.205 "copy": false, 00:16:47.205 "nvme_iov_md": false 00:16:47.205 }, 00:16:47.205 "driver_specific": { 00:16:47.205 "lvol": { 00:16:47.205 "lvol_store_uuid": "5d73a282-9883-4cdd-80dd-e9e999e5ff67", 00:16:47.205 "base_bdev": "nvme0n1", 00:16:47.205 "thin_provision": true, 00:16:47.205 "num_allocated_clusters": 0, 00:16:47.205 "snapshot": false, 00:16:47.205 "clone": false, 00:16:47.205 "esnap_clone": false 00:16:47.205 } 00:16:47.205 } 00:16:47.205 } 00:16:47.205 ]' 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:47.205 14:22:06 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:47.464 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:47.723 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:47.723 { 00:16:47.723 "name": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:47.723 "aliases": [ 00:16:47.723 "lvs/nvme0n1p0" 00:16:47.723 ], 00:16:47.723 "product_name": "Logical Volume", 00:16:47.723 "block_size": 4096, 00:16:47.723 "num_blocks": 26476544, 00:16:47.723 "uuid": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:47.723 "assigned_rate_limits": { 00:16:47.723 "rw_ios_per_sec": 0, 00:16:47.723 "rw_mbytes_per_sec": 0, 00:16:47.723 "r_mbytes_per_sec": 0, 00:16:47.723 "w_mbytes_per_sec": 0 00:16:47.723 }, 00:16:47.723 "claimed": false, 00:16:47.723 "zoned": false, 00:16:47.723 "supported_io_types": { 00:16:47.723 "read": true, 00:16:47.723 "write": true, 00:16:47.723 "unmap": true, 00:16:47.723 "flush": false, 00:16:47.723 "reset": true, 00:16:47.723 "nvme_admin": false, 00:16:47.723 "nvme_io": false, 00:16:47.723 "nvme_io_md": false, 00:16:47.723 "write_zeroes": true, 00:16:47.723 "zcopy": false, 00:16:47.723 "get_zone_info": false, 00:16:47.723 "zone_management": false, 00:16:47.723 "zone_append": false, 00:16:47.723 "compare": false, 00:16:47.723 "compare_and_write": false, 00:16:47.723 "abort": false, 00:16:47.723 "seek_hole": true, 00:16:47.723 "seek_data": true, 00:16:47.723 "copy": false, 00:16:47.723 "nvme_iov_md": false 00:16:47.723 }, 00:16:47.723 "driver_specific": { 00:16:47.723 "lvol": { 00:16:47.723 "lvol_store_uuid": "5d73a282-9883-4cdd-80dd-e9e999e5ff67", 00:16:47.723 "base_bdev": "nvme0n1", 00:16:47.723 "thin_provision": true, 00:16:47.723 "num_allocated_clusters": 0, 00:16:47.723 "snapshot": false, 00:16:47.723 "clone": false, 00:16:47.723 "esnap_clone": false 00:16:47.723 } 00:16:47.723 } 00:16:47.723 } 00:16:47.723 ]' 00:16:47.723 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:47.723 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:47.723 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:47.983 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:47.983 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:47.983 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:47.983 14:22:07 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:47.983 14:22:07 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:48.242 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=0f6c5c6e-a5b4-4d4c-ab15-65a163308585 
00:16:48.242 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:48.242 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 00:16:48.243 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:48.243 { 00:16:48.243 "name": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:48.243 "aliases": [ 00:16:48.243 "lvs/nvme0n1p0" 00:16:48.243 ], 00:16:48.243 "product_name": "Logical Volume", 00:16:48.243 "block_size": 4096, 00:16:48.243 "num_blocks": 26476544, 00:16:48.243 "uuid": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:48.243 "assigned_rate_limits": { 00:16:48.243 "rw_ios_per_sec": 0, 00:16:48.243 "rw_mbytes_per_sec": 0, 00:16:48.243 "r_mbytes_per_sec": 0, 00:16:48.243 "w_mbytes_per_sec": 0 00:16:48.243 }, 00:16:48.243 "claimed": false, 00:16:48.243 "zoned": false, 00:16:48.243 "supported_io_types": { 00:16:48.243 "read": true, 00:16:48.243 "write": true, 00:16:48.243 "unmap": true, 00:16:48.243 "flush": false, 00:16:48.243 "reset": true, 00:16:48.243 "nvme_admin": false, 00:16:48.243 "nvme_io": false, 00:16:48.243 "nvme_io_md": false, 00:16:48.243 "write_zeroes": true, 00:16:48.243 "zcopy": false, 00:16:48.243 "get_zone_info": false, 00:16:48.243 "zone_management": false, 00:16:48.243 "zone_append": false, 00:16:48.243 "compare": false, 00:16:48.243 "compare_and_write": false, 00:16:48.243 "abort": false, 00:16:48.243 "seek_hole": true, 00:16:48.243 "seek_data": true, 00:16:48.243 "copy": false, 00:16:48.243 "nvme_iov_md": false 00:16:48.243 }, 00:16:48.243 "driver_specific": { 00:16:48.243 "lvol": { 00:16:48.243 "lvol_store_uuid": "5d73a282-9883-4cdd-80dd-e9e999e5ff67", 00:16:48.243 "base_bdev": "nvme0n1", 00:16:48.243 "thin_provision": true, 00:16:48.243 "num_allocated_clusters": 0, 00:16:48.243 "snapshot": false, 00:16:48.243 "clone": false, 00:16:48.243 "esnap_clone": false 00:16:48.243 } 00:16:48.243 } 00:16:48.243 } 00:16:48.243 ]' 00:16:48.243 14:22:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:48.502 14:22:08 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0f6c5c6e-a5b4-4d4c-ab15-65a163308585 -c nvc0n1p0 --l2p_dram_limit 60 00:16:48.761 [2024-07-26 14:22:08.274335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.274400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:48.761 [2024-07-26 14:22:08.274438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:48.761 [2024-07-26 14:22:08.274452] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.761 [2024-07-26 14:22:08.274534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.274555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:48.761 [2024-07-26 14:22:08.274567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:16:48.761 [2024-07-26 14:22:08.274580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.761 [2024-07-26 14:22:08.274614] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:48.761 [2024-07-26 14:22:08.275652] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:48.761 [2024-07-26 14:22:08.275689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.275709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:48.761 [2024-07-26 14:22:08.275723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.082 ms 00:16:48.761 [2024-07-26 14:22:08.275737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.761 [2024-07-26 14:22:08.275885] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 286a91dd-e7d5-45a7-b527-c1b25b1d92b9 00:16:48.761 [2024-07-26 14:22:08.276992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.277048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:48.761 [2024-07-26 14:22:08.277068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:16:48.761 [2024-07-26 14:22:08.277081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.761 [2024-07-26 14:22:08.281647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.281710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:48.761 [2024-07-26 14:22:08.281750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.491 ms 00:16:48.761 [2024-07-26 14:22:08.281761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.761 [2024-07-26 14:22:08.281891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.281926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:48.761 [2024-07-26 14:22:08.281942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:16:48.761 [2024-07-26 14:22:08.281954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.761 [2024-07-26 14:22:08.282073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.761 [2024-07-26 14:22:08.282092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:48.762 [2024-07-26 14:22:08.282106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:48.762 [2024-07-26 14:22:08.282119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.762 [2024-07-26 14:22:08.282162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:48.762 [2024-07-26 14:22:08.286439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.762 [2024-07-26 14:22:08.286486] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:48.762 [2024-07-26 14:22:08.286519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.289 ms 00:16:48.762 [2024-07-26 14:22:08.286532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.762 [2024-07-26 14:22:08.286587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.762 [2024-07-26 14:22:08.286606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:48.762 [2024-07-26 14:22:08.286618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:48.762 [2024-07-26 14:22:08.286631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.762 [2024-07-26 14:22:08.286681] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:48.762 [2024-07-26 14:22:08.286844] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:48.762 [2024-07-26 14:22:08.286865] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:48.762 [2024-07-26 14:22:08.286885] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:48.762 [2024-07-26 14:22:08.286938] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:48.762 [2024-07-26 14:22:08.286960] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:48.762 [2024-07-26 14:22:08.286973] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:48.762 [2024-07-26 14:22:08.287002] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:48.762 [2024-07-26 14:22:08.287016] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:48.762 [2024-07-26 14:22:08.287029] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:48.762 [2024-07-26 14:22:08.287041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.762 [2024-07-26 14:22:08.287054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:48.762 [2024-07-26 14:22:08.287067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:16:48.762 [2024-07-26 14:22:08.287080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.762 [2024-07-26 14:22:08.287179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.762 [2024-07-26 14:22:08.287197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:48.762 [2024-07-26 14:22:08.287209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:16:48.762 [2024-07-26 14:22:08.287222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.762 [2024-07-26 14:22:08.287406] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:48.762 [2024-07-26 14:22:08.287429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:48.762 [2024-07-26 14:22:08.287456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287472] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:48.762 [2024-07-26 
14:22:08.287499] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:48.762 [2024-07-26 14:22:08.287535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.762 [2024-07-26 14:22:08.287559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:48.762 [2024-07-26 14:22:08.287574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:48.762 [2024-07-26 14:22:08.287584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:48.762 [2024-07-26 14:22:08.287598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:48.762 [2024-07-26 14:22:08.287614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:48.762 [2024-07-26 14:22:08.287627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:48.762 [2024-07-26 14:22:08.287653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287664] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:48.762 [2024-07-26 14:22:08.287688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287701] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:48.762 [2024-07-26 14:22:08.287725] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:48.762 [2024-07-26 14:22:08.287760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287773] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:48.762 [2024-07-26 14:22:08.287797] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287807] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:48.762 [2024-07-26 14:22:08.287820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:48.762 [2024-07-26 14:22:08.287831] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287847] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:48.762 [2024-07-26 14:22:08.287858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:48.762 [2024-07-26 14:22:08.287871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:48.762 [2024-07-26 14:22:08.287882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:48.762 [2024-07-26 14:22:08.287908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:48.762 [2024-07-26 14:22:08.287923] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:48.762 [2024-07-26 14:22:08.287937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:48.762 [2024-07-26 14:22:08.287961] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:48.762 [2024-07-26 14:22:08.287972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.287984] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:48.762 [2024-07-26 14:22:08.287997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:48.762 [2024-07-26 14:22:08.288033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:48.762 [2024-07-26 14:22:08.288048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:48.762 [2024-07-26 14:22:08.288063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:48.762 [2024-07-26 14:22:08.288075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:48.762 [2024-07-26 14:22:08.288091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:48.762 [2024-07-26 14:22:08.288102] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:48.762 [2024-07-26 14:22:08.288115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:48.762 [2024-07-26 14:22:08.288126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:48.762 [2024-07-26 14:22:08.288144] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:48.762 [2024-07-26 14:22:08.288160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.762 [2024-07-26 14:22:08.288180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:48.762 [2024-07-26 14:22:08.288192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:48.762 [2024-07-26 14:22:08.288206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:48.762 [2024-07-26 14:22:08.288218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:48.762 [2024-07-26 14:22:08.288233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:48.762 [2024-07-26 14:22:08.288245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:48.762 [2024-07-26 14:22:08.288259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:48.762 [2024-07-26 14:22:08.288271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:48.762 [2024-07-26 
14:22:08.288284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:48.762 [2024-07-26 14:22:08.288296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:48.762 [2024-07-26 14:22:08.288312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:48.762 [2024-07-26 14:22:08.288328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:48.762 [2024-07-26 14:22:08.288341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:48.762 [2024-07-26 14:22:08.288354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:48.762 [2024-07-26 14:22:08.288368] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:48.763 [2024-07-26 14:22:08.288381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:48.763 [2024-07-26 14:22:08.288397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:48.763 [2024-07-26 14:22:08.288410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:48.763 [2024-07-26 14:22:08.288424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:48.763 [2024-07-26 14:22:08.288436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:48.763 [2024-07-26 14:22:08.288451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:48.763 [2024-07-26 14:22:08.288464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:48.763 [2024-07-26 14:22:08.288479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.125 ms 00:16:48.763 [2024-07-26 14:22:08.288493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:48.763 [2024-07-26 14:22:08.288574] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
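The superblock metadata dump above describes each region as a type/version pair plus blk_offs and blk_sz given as hexadecimal block counts, while the preceding NV cache and base device layout dumps report the same regions in MiB. With the 4096-byte block size that ftl0 reports further down in this log, the two views cross-check: for example the l2p region's blk_sz:0x5000 is 20480 blocks * 4 KiB = 80.00 MiB, matching the layout dump. A minimal sketch for doing that conversion over a captured console log (build.log is a hypothetical file name for the saved output; GNU awk is assumed for strtonum):

    # Sketch only: decode "Region type:... blk_offs:0x... blk_sz:0x..." entries into MiB,
    # assuming the 4096-byte FTL block size reported for ftl0 in this log.
    grep -o 'Region type:[^ ]* ver:[0-9]* blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' build.log |
    awk -v bs=4096 '{
        split($4, o, ":"); split($5, s, ":")   # blk_offs / blk_sz, hex block counts
        printf "%-14s offs %10.2f MiB  size %9.2f MiB\n", $2, strtonum(o[2])*bs/1048576, strtonum(s[2])*bs/1048576
    }'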
00:16:48.763 [2024-07-26 14:22:08.288591] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:51.293 [2024-07-26 14:22:10.820637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.293 [2024-07-26 14:22:10.820703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:51.293 [2024-07-26 14:22:10.820743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2532.074 ms 00:16:51.293 [2024-07-26 14:22:10.820755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.293 [2024-07-26 14:22:10.849977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.293 [2024-07-26 14:22:10.850037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:51.293 [2024-07-26 14:22:10.850077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.961 ms 00:16:51.293 [2024-07-26 14:22:10.850089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.850284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.850303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:51.294 [2024-07-26 14:22:10.850318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:51.294 [2024-07-26 14:22:10.850333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.899343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.899431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:51.294 [2024-07-26 14:22:10.899485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.943 ms 00:16:51.294 [2024-07-26 14:22:10.899502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.899583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.899603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:51.294 [2024-07-26 14:22:10.899625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.294 [2024-07-26 14:22:10.899640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.900175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.900213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:51.294 [2024-07-26 14:22:10.900236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:16:51.294 [2024-07-26 14:22:10.900251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.900450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.900490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:51.294 [2024-07-26 14:22:10.900512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:16:51.294 [2024-07-26 14:22:10.900526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.920775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.920826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:51.294 [2024-07-26 
14:22:10.920865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.203 ms 00:16:51.294 [2024-07-26 14:22:10.920876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.933093] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:51.294 [2024-07-26 14:22:10.946304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.946393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:51.294 [2024-07-26 14:22:10.946414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.215 ms 00:16:51.294 [2024-07-26 14:22:10.946428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.997415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.997508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:51.294 [2024-07-26 14:22:10.997529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.925 ms 00:16:51.294 [2024-07-26 14:22:10.997544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:10.997817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:10.997841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:51.294 [2024-07-26 14:22:10.997855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:16:51.294 [2024-07-26 14:22:10.997870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.294 [2024-07-26 14:22:11.026039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.294 [2024-07-26 14:22:11.026087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:51.294 [2024-07-26 14:22:11.026106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.060 ms 00:16:51.294 [2024-07-26 14:22:11.026119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.055985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.056029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:51.554 [2024-07-26 14:22:11.056064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.815 ms 00:16:51.554 [2024-07-26 14:22:11.056093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.056858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.056942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:51.554 [2024-07-26 14:22:11.056961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:16:51.554 [2024-07-26 14:22:11.056976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.142326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.142401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:51.554 [2024-07-26 14:22:11.142438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.270 ms 00:16:51.554 [2024-07-26 14:22:11.142454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 
14:22:11.171991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.172045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:51.554 [2024-07-26 14:22:11.172063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.478 ms 00:16:51.554 [2024-07-26 14:22:11.172077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.200108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.200160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:51.554 [2024-07-26 14:22:11.200177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.974 ms 00:16:51.554 [2024-07-26 14:22:11.200190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.228439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.228501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:51.554 [2024-07-26 14:22:11.228520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.196 ms 00:16:51.554 [2024-07-26 14:22:11.228533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.228596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.228617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:51.554 [2024-07-26 14:22:11.228631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:51.554 [2024-07-26 14:22:11.228646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.228790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.554 [2024-07-26 14:22:11.228813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:51.554 [2024-07-26 14:22:11.228826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:51.554 [2024-07-26 14:22:11.228839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.554 [2024-07-26 14:22:11.230129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2955.209 ms, result 0 00:16:51.554 { 00:16:51.554 "name": "ftl0", 00:16:51.554 "uuid": "286a91dd-e7d5-45a7-b527-c1b25b1d92b9" 00:16:51.554 } 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:51.554 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:51.812 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:52.071 [ 00:16:52.071 { 00:16:52.071 "name": "ftl0", 00:16:52.071 "aliases": [ 00:16:52.071 "286a91dd-e7d5-45a7-b527-c1b25b1d92b9" 00:16:52.071 ], 00:16:52.071 "product_name": "FTL 
disk", 00:16:52.071 "block_size": 4096, 00:16:52.071 "num_blocks": 20971520, 00:16:52.071 "uuid": "286a91dd-e7d5-45a7-b527-c1b25b1d92b9", 00:16:52.071 "assigned_rate_limits": { 00:16:52.071 "rw_ios_per_sec": 0, 00:16:52.071 "rw_mbytes_per_sec": 0, 00:16:52.071 "r_mbytes_per_sec": 0, 00:16:52.071 "w_mbytes_per_sec": 0 00:16:52.071 }, 00:16:52.071 "claimed": false, 00:16:52.071 "zoned": false, 00:16:52.071 "supported_io_types": { 00:16:52.071 "read": true, 00:16:52.071 "write": true, 00:16:52.071 "unmap": true, 00:16:52.071 "flush": true, 00:16:52.071 "reset": false, 00:16:52.071 "nvme_admin": false, 00:16:52.071 "nvme_io": false, 00:16:52.071 "nvme_io_md": false, 00:16:52.071 "write_zeroes": true, 00:16:52.071 "zcopy": false, 00:16:52.071 "get_zone_info": false, 00:16:52.071 "zone_management": false, 00:16:52.071 "zone_append": false, 00:16:52.071 "compare": false, 00:16:52.071 "compare_and_write": false, 00:16:52.071 "abort": false, 00:16:52.071 "seek_hole": false, 00:16:52.071 "seek_data": false, 00:16:52.071 "copy": false, 00:16:52.071 "nvme_iov_md": false 00:16:52.071 }, 00:16:52.071 "driver_specific": { 00:16:52.071 "ftl": { 00:16:52.071 "base_bdev": "0f6c5c6e-a5b4-4d4c-ab15-65a163308585", 00:16:52.071 "cache": "nvc0n1p0" 00:16:52.071 } 00:16:52.071 } 00:16:52.071 } 00:16:52.071 ] 00:16:52.071 14:22:11 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:16:52.071 14:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:52.071 14:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:52.330 14:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:52.330 14:22:11 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:52.590 [2024-07-26 14:22:12.150864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.151196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:52.590 [2024-07-26 14:22:12.151359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:52.590 [2024-07-26 14:22:12.151500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.151671] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:52.590 [2024-07-26 14:22:12.154807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.154845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:52.590 [2024-07-26 14:22:12.154878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.100 ms 00:16:52.590 [2024-07-26 14:22:12.154891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.155351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.155384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:52.590 [2024-07-26 14:22:12.155399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:16:52.590 [2024-07-26 14:22:12.155415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.158617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.158652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:52.590 
[2024-07-26 14:22:12.158684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:16:52.590 [2024-07-26 14:22:12.158697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.164940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.164976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:52.590 [2024-07-26 14:22:12.165007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:16:52.590 [2024-07-26 14:22:12.165024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.193320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.193381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:52.590 [2024-07-26 14:22:12.193399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.208 ms 00:16:52.590 [2024-07-26 14:22:12.193412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.210800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.210868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:52.590 [2024-07-26 14:22:12.210886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.337 ms 00:16:52.590 [2024-07-26 14:22:12.210900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.211239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.211267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:52.590 [2024-07-26 14:22:12.211281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:16:52.590 [2024-07-26 14:22:12.211296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.240528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.240589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:52.590 [2024-07-26 14:22:12.240606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.183 ms 00:16:52.590 [2024-07-26 14:22:12.240619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.268788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.268848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:52.590 [2024-07-26 14:22:12.268865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.117 ms 00:16:52.590 [2024-07-26 14:22:12.268878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.298536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.298618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:52.590 [2024-07-26 14:22:12.298636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.577 ms 00:16:52.590 [2024-07-26 14:22:12.298648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.330005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.590 [2024-07-26 14:22:12.330065] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:52.590 [2024-07-26 14:22:12.330100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.221 ms 00:16:52.590 [2024-07-26 14:22:12.330114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.590 [2024-07-26 14:22:12.330170] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:52.590 [2024-07-26 14:22:12.330200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:52.590 [2024-07-26 14:22:12.330343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 
[2024-07-26 14:22:12.330534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:52.591 [2024-07-26 14:22:12.330872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.330984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:52.591 [2024-07-26 14:22:12.331669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:52.592 [2024-07-26 14:22:12.331681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:52.592 [2024-07-26 14:22:12.331707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:52.592 [2024-07-26 14:22:12.331720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 286a91dd-e7d5-45a7-b527-c1b25b1d92b9 00:16:52.592 [2024-07-26 14:22:12.331735] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:52.592 [2024-07-26 14:22:12.331750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:52.592 [2024-07-26 14:22:12.331766] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:52.592 [2024-07-26 14:22:12.331778] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:52.592 [2024-07-26 14:22:12.331792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:52.592 [2024-07-26 14:22:12.331804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:52.592 [2024-07-26 14:22:12.331818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:52.592 [2024-07-26 14:22:12.331829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:52.592 [2024-07-26 14:22:12.331848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:52.592 [2024-07-26 14:22:12.331860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.592 [2024-07-26 14:22:12.331874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:52.592 [2024-07-26 14:22:12.331887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:16:52.592 [2024-07-26 14:22:12.331915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.592 [2024-07-26 14:22:12.349269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.592 [2024-07-26 14:22:12.349333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:52.592 [2024-07-26 14:22:12.349352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.257 ms 00:16:52.592 [2024-07-26 14:22:12.349367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.592 [2024-07-26 14:22:12.349862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:52.592 [2024-07-26 14:22:12.349891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:52.592 [2024-07-26 14:22:12.349922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:16:52.592 [2024-07-26 14:22:12.349937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.405218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.405298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:52.851 [2024-07-26 14:22:12.405316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.405330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
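The band validity dump above prints one line per band in the form "Band N: valid / size wr_cnt: writes state: state"; on this freshly created device all 100 bands are still free with zero writes per band, which also lines up with WAF being reported as inf (960 total internal writes against 0 user writes). A quick way to summarise such a dump from a captured log, again only as a sketch (build.log stands in for the saved console output):

    # Sketch only: tally FTL bands per state and sum counters from the "Bands validity" dump.
    grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
    awk '{ state[$NF]++; valid += $3; wr += $7 }
         END { for (s in state) printf "%-10s %d bands\n", s, state[s]
               printf "valid blocks: %d   total band wr_cnt: %d\n", valid, wr }'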
00:16:52.851 [2024-07-26 14:22:12.405409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.405426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:52.851 [2024-07-26 14:22:12.405438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.405450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.405588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.405612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:52.851 [2024-07-26 14:22:12.405625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.405638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.405674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.405692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:52.851 [2024-07-26 14:22:12.405705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.405717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.504550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.504606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:52.851 [2024-07-26 14:22:12.504640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.504654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.588685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.588786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:52.851 [2024-07-26 14:22:12.588806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.588820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.588993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.589023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:52.851 [2024-07-26 14:22:12.589037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.589066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.589150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.589176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:52.851 [2024-07-26 14:22:12.589190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.589204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.589348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.589377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:52.851 [2024-07-26 14:22:12.589391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 
14:22:12.589406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.589478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.589501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:52.851 [2024-07-26 14:22:12.589515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.589530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.589586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.589612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:52.851 [2024-07-26 14:22:12.589628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.589642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.589707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:52.851 [2024-07-26 14:22:12.589730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:52.851 [2024-07-26 14:22:12.589744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:52.851 [2024-07-26 14:22:12.589758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:52.851 [2024-07-26 14:22:12.589966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.058 ms, result 0 00:16:52.851 true 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77491 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 77491 ']' 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 77491 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77491 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:53.110 killing process with pid 77491 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77491' 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 77491 00:16:53.110 14:22:12 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 77491 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:57.302 14:22:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:57.302 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:57.302 fio-3.35 00:16:57.302 Starting 1 thread 00:17:03.867 00:17:03.867 test: (groupid=0, jobs=1): err= 0: pid=77694: Fri Jul 26 14:22:22 2024 00:17:03.867 read: IOPS=901, BW=59.8MiB/s (62.7MB/s)(255MiB/4254msec) 00:17:03.867 slat (nsec): min=5409, max=41870, avg=7690.09, stdev=3751.42 00:17:03.867 clat (usec): min=338, max=1103, avg=493.54, stdev=50.80 00:17:03.867 lat (usec): min=346, max=1123, avg=501.23, stdev=51.65 00:17:03.867 clat percentiles (usec): 00:17:03.867 | 1.00th=[ 388], 5.00th=[ 429], 10.00th=[ 445], 20.00th=[ 457], 00:17:03.867 | 30.00th=[ 469], 40.00th=[ 478], 50.00th=[ 486], 60.00th=[ 494], 00:17:03.867 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 586], 00:17:03.868 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 791], 99.95th=[ 816], 00:17:03.868 | 99.99th=[ 1106] 00:17:03.868 write: IOPS=907, BW=60.3MiB/s (63.2MB/s)(256MiB/4249msec); 0 zone resets 00:17:03.868 slat (usec): min=19, max=116, avg=25.19, stdev= 7.55 00:17:03.868 clat (usec): min=378, max=973, avg=565.15, stdev=63.29 00:17:03.868 lat (usec): min=400, max=1050, avg=590.34, stdev=63.74 00:17:03.868 clat percentiles (usec): 00:17:03.868 | 1.00th=[ 449], 5.00th=[ 482], 10.00th=[ 494], 20.00th=[ 519], 00:17:03.868 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 562], 60.00th=[ 570], 00:17:03.868 | 70.00th=[ 586], 80.00th=[ 603], 90.00th=[ 635], 95.00th=[ 660], 00:17:03.868 | 99.00th=[ 824], 99.50th=[ 873], 99.90th=[ 955], 99.95th=[ 971], 00:17:03.868 | 99.99th=[ 971] 00:17:03.868 bw ( KiB/s): min=60248, max=62560, per=100.00%, avg=61710.00, stdev=740.46, samples=8 00:17:03.868 iops : min= 886, max= 920, avg=907.50, stdev=10.89, samples=8 00:17:03.868 lat (usec) : 500=38.00%, 750=61.15%, 1000=0.83% 00:17:03.868 lat (msec) : 
2=0.01% 00:17:03.868 cpu : usr=99.06%, sys=0.16%, ctx=6, majf=0, minf=1171 00:17:03.868 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:03.868 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.868 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:03.868 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:03.868 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:03.868 00:17:03.868 Run status group 0 (all jobs): 00:17:03.868 READ: bw=59.8MiB/s (62.7MB/s), 59.8MiB/s-59.8MiB/s (62.7MB/s-62.7MB/s), io=255MiB (267MB), run=4254-4254msec 00:17:03.868 WRITE: bw=60.3MiB/s (63.2MB/s), 60.3MiB/s-60.3MiB/s (63.2MB/s-63.2MB/s), io=256MiB (269MB), run=4249-4249msec 00:17:04.126 ----------------------------------------------------- 00:17:04.126 Suppressions used: 00:17:04.126 count bytes template 00:17:04.126 1 5 /usr/src/fio/parse.c 00:17:04.126 1 8 libtcmalloc_minimal.so 00:17:04.126 1 904 libcrypto.so 00:17:04.126 ----------------------------------------------------- 00:17:04.126 00:17:04.126 14:22:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:04.126 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:04.126 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:04.384 14:22:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:04.384 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:04.384 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:04.384 fio-3.35 00:17:04.384 Starting 2 threads 00:17:36.474 00:17:36.474 first_half: (groupid=0, jobs=1): err= 0: pid=77797: Fri Jul 26 14:22:54 2024 00:17:36.474 read: IOPS=2206, BW=8827KiB/s (9038kB/s)(255MiB/29568msec) 00:17:36.474 slat (nsec): min=4360, max=87733, avg=7562.16, stdev=2786.75 00:17:36.474 clat (usec): min=821, max=307592, avg=43932.95, stdev=21479.66 00:17:36.474 lat (usec): min=827, max=307599, avg=43940.52, stdev=21479.81 00:17:36.474 clat percentiles (msec): 00:17:36.474 | 1.00th=[ 11], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 40], 00:17:36.474 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:17:36.474 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 54], 00:17:36.474 | 99.00th=[ 163], 99.50th=[ 192], 99.90th=[ 241], 99.95th=[ 264], 00:17:36.474 | 99.99th=[ 300] 00:17:36.474 write: IOPS=2614, BW=10.2MiB/s (10.7MB/s)(256MiB/25071msec); 0 zone resets 00:17:36.474 slat (usec): min=5, max=363, avg= 9.66, stdev= 5.81 00:17:36.474 clat (usec): min=462, max=124469, avg=13953.99, stdev=23987.87 00:17:36.474 lat (usec): min=473, max=124478, avg=13963.65, stdev=23988.22 00:17:36.474 clat percentiles (usec): 00:17:36.474 | 1.00th=[ 947], 5.00th=[ 1287], 10.00th=[ 1516], 20.00th=[ 2008], 00:17:36.474 | 30.00th=[ 3916], 40.00th=[ 5538], 50.00th=[ 6390], 60.00th=[ 7177], 00:17:36.474 | 70.00th=[ 8291], 80.00th=[ 12911], 90.00th=[ 37487], 95.00th=[ 88605], 00:17:36.474 | 99.00th=[100140], 99.50th=[103285], 99.90th=[114820], 99.95th=[120062], 00:17:36.474 | 99.99th=[123208] 00:17:36.474 bw ( KiB/s): min= 8, max=42736, per=89.52%, avg=18720.18, stdev=11800.85, samples=28 00:17:36.474 iops : min= 2, max=10684, avg=4679.96, stdev=2950.27, samples=28 00:17:36.474 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.66% 00:17:36.474 lat (msec) : 2=9.38%, 4=5.48%, 10=22.65%, 20=7.84%, 50=46.85% 00:17:36.474 lat (msec) : 100=5.23%, 250=1.79%, 500=0.04% 00:17:36.474 cpu : usr=98.97%, sys=0.30%, ctx=42, majf=0, minf=5599 00:17:36.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:36.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.474 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.474 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.474 second_half: (groupid=0, jobs=1): err= 0: pid=77798: Fri Jul 26 14:22:54 2024 00:17:36.474 read: IOPS=2215, BW=8862KiB/s (9074kB/s)(255MiB/29422msec) 00:17:36.474 slat (nsec): min=4516, max=52789, avg=7714.59, stdev=2811.84 00:17:36.474 clat (usec): min=874, max=313516, avg=44551.29, stdev=20276.88 00:17:36.474 lat (usec): min=884, max=313525, avg=44559.00, stdev=20277.05 00:17:36.474 clat percentiles (msec): 00:17:36.474 | 1.00th=[ 9], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:17:36.474 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:17:36.474 | 70.00th=[ 42], 80.00th=[ 43], 90.00th=[ 47], 95.00th=[ 57], 
00:17:36.474 | 99.00th=[ 159], 99.50th=[ 188], 99.90th=[ 209], 99.95th=[ 218], 00:17:36.474 | 99.99th=[ 305] 00:17:36.474 write: IOPS=2871, BW=11.2MiB/s (11.8MB/s)(256MiB/22821msec); 0 zone resets 00:17:36.474 slat (usec): min=5, max=314, avg= 9.91, stdev= 5.88 00:17:36.474 clat (usec): min=537, max=125534, avg=13125.26, stdev=23820.18 00:17:36.474 lat (usec): min=571, max=125542, avg=13135.17, stdev=23820.32 00:17:36.474 clat percentiles (usec): 00:17:36.474 | 1.00th=[ 1004], 5.00th=[ 1287], 10.00th=[ 1467], 20.00th=[ 1745], 00:17:36.474 | 30.00th=[ 2114], 40.00th=[ 3720], 50.00th=[ 5473], 60.00th=[ 6915], 00:17:36.474 | 70.00th=[ 8717], 80.00th=[ 12780], 90.00th=[ 23200], 95.00th=[ 87557], 00:17:36.474 | 99.00th=[101188], 99.50th=[103285], 99.90th=[120062], 99.95th=[122160], 00:17:36.474 | 99.99th=[124257] 00:17:36.474 bw ( KiB/s): min= 88, max=48704, per=92.84%, avg=19414.63, stdev=12670.84, samples=27 00:17:36.474 iops : min= 22, max=12176, avg=4853.70, stdev=3167.68, samples=27 00:17:36.474 lat (usec) : 750=0.05%, 1000=0.45% 00:17:36.474 lat (msec) : 2=13.52%, 4=7.18%, 10=16.08%, 20=8.60%, 50=46.92% 00:17:36.474 lat (msec) : 100=5.07%, 250=2.13%, 500=0.01% 00:17:36.474 cpu : usr=98.94%, sys=0.36%, ctx=52, majf=0, minf=5534 00:17:36.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:36.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.474 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.474 issued rwts: total=65182,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.474 00:17:36.474 Run status group 0 (all jobs): 00:17:36.474 READ: bw=17.2MiB/s (18.1MB/s), 8827KiB/s-8862KiB/s (9038kB/s-9074kB/s), io=509MiB (534MB), run=29422-29568msec 00:17:36.474 WRITE: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-11.2MiB/s (10.7MB/s-11.8MB/s), io=512MiB (537MB), run=22821-25071msec 00:17:37.041 ----------------------------------------------------- 00:17:37.041 Suppressions used: 00:17:37.041 count bytes template 00:17:37.041 2 10 /usr/src/fio/parse.c 00:17:37.041 3 288 /usr/src/fio/iolog.c 00:17:37.041 1 8 libtcmalloc_minimal.so 00:17:37.041 1 904 libcrypto.so 00:17:37.041 ----------------------------------------------------- 00:17:37.041 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:37.041 
14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:37.041 14:22:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:37.299 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:37.299 fio-3.35 00:17:37.299 Starting 1 thread 00:17:55.384 00:17:55.384 test: (groupid=0, jobs=1): err= 0: pid=78161: Fri Jul 26 14:23:14 2024 00:17:55.384 read: IOPS=6176, BW=24.1MiB/s (25.3MB/s)(255MiB/10557msec) 00:17:55.384 slat (nsec): min=4261, max=63966, avg=6909.09, stdev=2914.22 00:17:55.384 clat (usec): min=1036, max=40667, avg=20712.28, stdev=1175.80 00:17:55.384 lat (usec): min=1056, max=40675, avg=20719.18, stdev=1175.76 00:17:55.384 clat percentiles (usec): 00:17:55.384 | 1.00th=[19268], 5.00th=[19530], 10.00th=[19792], 20.00th=[20055], 00:17:55.384 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20579], 60.00th=[20841], 00:17:55.384 | 70.00th=[21103], 80.00th=[21103], 90.00th=[21365], 95.00th=[21890], 00:17:55.384 | 99.00th=[24249], 99.50th=[27919], 99.90th=[30540], 99.95th=[35914], 00:17:55.384 | 99.99th=[40109] 00:17:55.384 write: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(256MiB/5518msec); 0 zone resets 00:17:55.384 slat (usec): min=5, max=186, avg= 9.72, stdev= 5.84 00:17:55.384 clat (usec): min=686, max=63748, avg=10718.94, stdev=13434.87 00:17:55.384 lat (usec): min=694, max=63754, avg=10728.66, stdev=13434.92 00:17:55.384 clat percentiles (usec): 00:17:55.384 | 1.00th=[ 955], 5.00th=[ 1156], 10.00th=[ 1270], 20.00th=[ 1418], 00:17:55.384 | 30.00th=[ 1598], 40.00th=[ 1991], 50.00th=[ 7242], 60.00th=[ 8225], 00:17:55.384 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[39584], 95.00th=[41681], 00:17:55.384 | 99.00th=[44827], 99.50th=[46924], 99.90th=[48497], 99.95th=[52691], 00:17:55.384 | 99.99th=[61080] 00:17:55.384 bw ( KiB/s): min= 1016, max=61984, per=91.95%, avg=43682.17, stdev=15832.70, samples=12 00:17:55.384 iops : min= 254, max=15496, avg=10920.50, stdev=3958.15, samples=12 00:17:55.384 lat (usec) : 750=0.02%, 1000=0.78% 00:17:55.384 lat (msec) : 2=19.30%, 4=0.86%, 10=16.72%, 20=12.80%, 50=49.50% 00:17:55.384 lat (msec) : 100=0.04% 00:17:55.384 cpu : usr=98.35%, sys=0.81%, ctx=29, majf=0, 
minf=5567 00:17:55.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:55.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:55.384 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:55.384 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:55.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:55.384 00:17:55.384 Run status group 0 (all jobs): 00:17:55.384 READ: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=255MiB (267MB), run=10557-10557msec 00:17:55.384 WRITE: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=256MiB (268MB), run=5518-5518msec 00:17:56.321 ----------------------------------------------------- 00:17:56.321 Suppressions used: 00:17:56.321 count bytes template 00:17:56.321 1 5 /usr/src/fio/parse.c 00:17:56.321 2 192 /usr/src/fio/iolog.c 00:17:56.321 1 8 libtcmalloc_minimal.so 00:17:56.321 1 904 libcrypto.so 00:17:56.321 ----------------------------------------------------- 00:17:56.321 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.321 Remove shared memory files 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61979 /dev/shm/spdk_tgt_trace.pid76425 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:56.321 ************************************ 00:17:56.321 END TEST ftl_fio_basic 00:17:56.321 ************************************ 00:17:56.321 00:17:56.321 real 1m11.999s 00:17:56.321 user 2m40.590s 00:17:56.321 sys 0m3.677s 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:56.321 14:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:56.321 14:23:15 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:56.321 14:23:15 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:56.321 14:23:15 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:56.321 14:23:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:56.321 ************************************ 00:17:56.321 START TEST ftl_bdevperf 00:17:56.321 ************************************ 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:56.321 * Looking for test storage... 
00:17:56.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:56.321 14:23:15 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78410 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 78410 00:17:56.321 14:23:16 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 78410 ']' 00:17:56.321 14:23:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:56.321 14:23:16 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.321 14:23:16 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:56.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.321 14:23:16 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.321 14:23:16 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:56.321 14:23:16 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:56.321 [2024-07-26 14:23:16.079127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:56.321 [2024-07-26 14:23:16.079318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78410 ] 00:17:56.580 [2024-07-26 14:23:16.248548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.839 [2024-07-26 14:23:16.415728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:57.406 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:57.665 14:23:17 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:57.665 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:57.924 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:57.924 { 00:17:57.924 "name": "nvme0n1", 00:17:57.924 "aliases": [ 00:17:57.924 "862de7eb-7fca-4618-bc0b-d4d80b1e881a" 00:17:57.924 ], 00:17:57.924 "product_name": "NVMe disk", 00:17:57.924 "block_size": 4096, 00:17:57.924 "num_blocks": 1310720, 00:17:57.924 "uuid": "862de7eb-7fca-4618-bc0b-d4d80b1e881a", 00:17:57.924 "assigned_rate_limits": { 00:17:57.924 "rw_ios_per_sec": 0, 00:17:57.924 "rw_mbytes_per_sec": 0, 00:17:57.924 "r_mbytes_per_sec": 0, 00:17:57.924 "w_mbytes_per_sec": 0 00:17:57.924 }, 00:17:57.924 "claimed": true, 00:17:57.924 "claim_type": "read_many_write_one", 00:17:57.924 "zoned": false, 00:17:57.924 "supported_io_types": { 00:17:57.924 "read": true, 00:17:57.924 "write": true, 00:17:57.924 "unmap": true, 00:17:57.924 "flush": true, 00:17:57.924 "reset": true, 00:17:57.924 "nvme_admin": true, 00:17:57.924 "nvme_io": true, 00:17:57.924 "nvme_io_md": false, 00:17:57.924 "write_zeroes": true, 00:17:57.924 "zcopy": false, 00:17:57.924 "get_zone_info": false, 00:17:57.924 "zone_management": false, 00:17:57.924 "zone_append": false, 00:17:57.924 "compare": true, 00:17:57.924 "compare_and_write": false, 00:17:57.924 "abort": true, 00:17:57.924 "seek_hole": false, 00:17:57.924 "seek_data": false, 00:17:57.924 "copy": true, 00:17:57.924 "nvme_iov_md": false 00:17:57.924 }, 00:17:57.924 "driver_specific": { 00:17:57.924 "nvme": [ 00:17:57.924 { 00:17:57.924 "pci_address": "0000:00:11.0", 00:17:57.924 "trid": { 00:17:57.924 "trtype": "PCIe", 00:17:57.924 "traddr": "0000:00:11.0" 00:17:57.924 }, 00:17:57.924 "ctrlr_data": { 00:17:57.924 "cntlid": 0, 00:17:57.924 "vendor_id": "0x1b36", 00:17:57.924 "model_number": "QEMU NVMe Ctrl", 00:17:57.924 "serial_number": "12341", 00:17:57.924 "firmware_revision": "8.0.0", 00:17:57.924 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:57.924 "oacs": { 00:17:57.924 "security": 0, 00:17:57.924 "format": 1, 00:17:57.924 "firmware": 0, 00:17:57.924 "ns_manage": 1 00:17:57.924 }, 00:17:57.924 "multi_ctrlr": false, 00:17:57.924 "ana_reporting": false 00:17:57.924 }, 00:17:57.924 "vs": { 00:17:57.924 "nvme_version": "1.4" 00:17:57.924 }, 00:17:57.924 "ns_data": { 00:17:57.924 "id": 1, 00:17:57.924 "can_share": false 00:17:57.924 } 00:17:57.924 } 00:17:57.924 ], 00:17:57.924 "mp_policy": "active_passive" 00:17:57.924 } 00:17:57.924 } 00:17:57.924 ]' 00:17:57.924 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:57.924 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:57.924 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:58.183 14:23:17 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5d73a282-9883-4cdd-80dd-e9e999e5ff67 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:58.183 14:23:17 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d73a282-9883-4cdd-80dd-e9e999e5ff67 00:17:58.750 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:58.750 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d 00:17:58.750 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:59.008 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.266 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:59.266 { 00:17:59.267 "name": "22604692-627c-4ab1-b2f5-b5c32bfe2438", 00:17:59.267 "aliases": [ 00:17:59.267 "lvs/nvme0n1p0" 00:17:59.267 ], 00:17:59.267 "product_name": "Logical Volume", 00:17:59.267 "block_size": 4096, 00:17:59.267 "num_blocks": 26476544, 00:17:59.267 "uuid": "22604692-627c-4ab1-b2f5-b5c32bfe2438", 00:17:59.267 "assigned_rate_limits": { 00:17:59.267 "rw_ios_per_sec": 0, 00:17:59.267 "rw_mbytes_per_sec": 0, 00:17:59.267 "r_mbytes_per_sec": 0, 00:17:59.267 "w_mbytes_per_sec": 0 00:17:59.267 }, 00:17:59.267 "claimed": false, 00:17:59.267 "zoned": false, 00:17:59.267 "supported_io_types": { 00:17:59.267 "read": true, 00:17:59.267 "write": true, 00:17:59.267 "unmap": true, 00:17:59.267 "flush": false, 00:17:59.267 "reset": true, 00:17:59.267 "nvme_admin": false, 00:17:59.267 "nvme_io": false, 00:17:59.267 "nvme_io_md": false, 00:17:59.267 "write_zeroes": true, 00:17:59.267 "zcopy": false, 00:17:59.267 "get_zone_info": false, 00:17:59.267 "zone_management": false, 00:17:59.267 "zone_append": false, 00:17:59.267 "compare": false, 00:17:59.267 "compare_and_write": false, 00:17:59.267 "abort": false, 00:17:59.267 "seek_hole": true, 
00:17:59.267 "seek_data": true, 00:17:59.267 "copy": false, 00:17:59.267 "nvme_iov_md": false 00:17:59.267 }, 00:17:59.267 "driver_specific": { 00:17:59.267 "lvol": { 00:17:59.267 "lvol_store_uuid": "d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d", 00:17:59.267 "base_bdev": "nvme0n1", 00:17:59.267 "thin_provision": true, 00:17:59.267 "num_allocated_clusters": 0, 00:17:59.267 "snapshot": false, 00:17:59.267 "clone": false, 00:17:59.267 "esnap_clone": false 00:17:59.267 } 00:17:59.267 } 00:17:59.267 } 00:17:59.267 ]' 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:59.267 14:23:18 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:59.267 14:23:19 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:17:59.858 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:59.858 { 00:17:59.858 "name": "22604692-627c-4ab1-b2f5-b5c32bfe2438", 00:17:59.858 "aliases": [ 00:17:59.858 "lvs/nvme0n1p0" 00:17:59.858 ], 00:17:59.858 "product_name": "Logical Volume", 00:17:59.858 "block_size": 4096, 00:17:59.858 "num_blocks": 26476544, 00:17:59.858 "uuid": "22604692-627c-4ab1-b2f5-b5c32bfe2438", 00:17:59.859 "assigned_rate_limits": { 00:17:59.859 "rw_ios_per_sec": 0, 00:17:59.859 "rw_mbytes_per_sec": 0, 00:17:59.859 "r_mbytes_per_sec": 0, 00:17:59.859 "w_mbytes_per_sec": 0 00:17:59.859 }, 00:17:59.859 "claimed": false, 00:17:59.859 "zoned": false, 00:17:59.859 "supported_io_types": { 00:17:59.859 "read": true, 00:17:59.859 "write": true, 00:17:59.859 "unmap": true, 00:17:59.859 "flush": false, 00:17:59.859 "reset": true, 00:17:59.859 "nvme_admin": false, 00:17:59.859 "nvme_io": false, 00:17:59.859 "nvme_io_md": false, 00:17:59.859 "write_zeroes": true, 00:17:59.859 "zcopy": false, 00:17:59.859 "get_zone_info": false, 00:17:59.859 "zone_management": false, 00:17:59.859 "zone_append": false, 00:17:59.859 "compare": false, 00:17:59.859 "compare_and_write": false, 00:17:59.859 "abort": false, 00:17:59.859 "seek_hole": true, 00:17:59.859 "seek_data": true, 00:17:59.859 
"copy": false, 00:17:59.859 "nvme_iov_md": false 00:17:59.859 }, 00:17:59.859 "driver_specific": { 00:17:59.859 "lvol": { 00:17:59.859 "lvol_store_uuid": "d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d", 00:17:59.859 "base_bdev": "nvme0n1", 00:17:59.859 "thin_provision": true, 00:17:59.859 "num_allocated_clusters": 0, 00:17:59.859 "snapshot": false, 00:17:59.859 "clone": false, 00:17:59.859 "esnap_clone": false 00:17:59.859 } 00:17:59.859 } 00:17:59.859 } 00:17:59.859 ]' 00:17:59.859 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.117 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.117 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.117 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:00.117 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:00.117 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:00.118 14:23:19 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:00.118 14:23:19 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=22604692-627c-4ab1-b2f5-b5c32bfe2438 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:18:00.376 14:23:19 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22604692-627c-4ab1-b2f5-b5c32bfe2438 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.635 { 00:18:00.635 "name": "22604692-627c-4ab1-b2f5-b5c32bfe2438", 00:18:00.635 "aliases": [ 00:18:00.635 "lvs/nvme0n1p0" 00:18:00.635 ], 00:18:00.635 "product_name": "Logical Volume", 00:18:00.635 "block_size": 4096, 00:18:00.635 "num_blocks": 26476544, 00:18:00.635 "uuid": "22604692-627c-4ab1-b2f5-b5c32bfe2438", 00:18:00.635 "assigned_rate_limits": { 00:18:00.635 "rw_ios_per_sec": 0, 00:18:00.635 "rw_mbytes_per_sec": 0, 00:18:00.635 "r_mbytes_per_sec": 0, 00:18:00.635 "w_mbytes_per_sec": 0 00:18:00.635 }, 00:18:00.635 "claimed": false, 00:18:00.635 "zoned": false, 00:18:00.635 "supported_io_types": { 00:18:00.635 "read": true, 00:18:00.635 "write": true, 00:18:00.635 "unmap": true, 00:18:00.635 "flush": false, 00:18:00.635 "reset": true, 00:18:00.635 "nvme_admin": false, 00:18:00.635 "nvme_io": false, 00:18:00.635 "nvme_io_md": false, 00:18:00.635 "write_zeroes": true, 00:18:00.635 "zcopy": false, 00:18:00.635 "get_zone_info": false, 00:18:00.635 "zone_management": false, 00:18:00.635 "zone_append": false, 00:18:00.635 "compare": false, 00:18:00.635 "compare_and_write": false, 00:18:00.635 "abort": false, 00:18:00.635 "seek_hole": true, 00:18:00.635 "seek_data": true, 00:18:00.635 "copy": false, 00:18:00.635 "nvme_iov_md": false 00:18:00.635 }, 00:18:00.635 "driver_specific": { 00:18:00.635 "lvol": { 00:18:00.635 "lvol_store_uuid": "d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d", 00:18:00.635 "base_bdev": 
"nvme0n1", 00:18:00.635 "thin_provision": true, 00:18:00.635 "num_allocated_clusters": 0, 00:18:00.635 "snapshot": false, 00:18:00.635 "clone": false, 00:18:00.635 "esnap_clone": false 00:18:00.635 } 00:18:00.635 } 00:18:00.635 } 00:18:00.635 ]' 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:00.635 14:23:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 22604692-627c-4ab1-b2f5-b5c32bfe2438 -c nvc0n1p0 --l2p_dram_limit 20 00:18:00.895 [2024-07-26 14:23:20.521802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.521860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:00.895 [2024-07-26 14:23:20.521901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:00.895 [2024-07-26 14:23:20.521962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.522064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.522083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.895 [2024-07-26 14:23:20.522102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:00.895 [2024-07-26 14:23:20.522113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.522143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:00.895 [2024-07-26 14:23:20.523168] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:00.895 [2024-07-26 14:23:20.523210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.523225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.895 [2024-07-26 14:23:20.523240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:18:00.895 [2024-07-26 14:23:20.523266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.523406] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 02b6963d-a09a-473c-892a-21a06fd75d31 00:18:00.895 [2024-07-26 14:23:20.524528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.524568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:00.895 [2024-07-26 14:23:20.524601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:00.895 [2024-07-26 14:23:20.524613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.529434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.529479] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.895 [2024-07-26 14:23:20.529511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:18:00.895 [2024-07-26 14:23:20.529523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.529626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.529648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.895 [2024-07-26 14:23:20.529660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:00.895 [2024-07-26 14:23:20.529674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.529742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.529761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:00.895 [2024-07-26 14:23:20.529772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:00.895 [2024-07-26 14:23:20.529783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.529809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.895 [2024-07-26 14:23:20.534330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.534366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.895 [2024-07-26 14:23:20.534403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.526 ms 00:18:00.895 [2024-07-26 14:23:20.534414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.534455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.534469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:00.895 [2024-07-26 14:23:20.534481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:00.895 [2024-07-26 14:23:20.534492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.534542] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:00.895 [2024-07-26 14:23:20.534683] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:00.895 [2024-07-26 14:23:20.534705] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:00.895 [2024-07-26 14:23:20.534719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:00.895 [2024-07-26 14:23:20.534734] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:00.895 [2024-07-26 14:23:20.534746] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:00.895 [2024-07-26 14:23:20.534758] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:00.895 [2024-07-26 14:23:20.534768] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:00.895 [2024-07-26 14:23:20.534781] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:00.895 [2024-07-26 14:23:20.534790] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:18:00.895 [2024-07-26 14:23:20.534803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.534813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:00.895 [2024-07-26 14:23:20.534828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:00.895 [2024-07-26 14:23:20.534838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.534975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.895 [2024-07-26 14:23:20.534994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:00.895 [2024-07-26 14:23:20.535008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:18:00.895 [2024-07-26 14:23:20.535019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.895 [2024-07-26 14:23:20.535124] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:00.895 [2024-07-26 14:23:20.535140] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:00.895 [2024-07-26 14:23:20.535153] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.895 [2024-07-26 14:23:20.535167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.895 [2024-07-26 14:23:20.535180] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:00.895 [2024-07-26 14:23:20.535191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:00.895 [2024-07-26 14:23:20.535203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:00.895 [2024-07-26 14:23:20.535213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:00.896 [2024-07-26 14:23:20.535243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.896 [2024-07-26 14:23:20.535265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:00.896 [2024-07-26 14:23:20.535276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:00.896 [2024-07-26 14:23:20.535288] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:00.896 [2024-07-26 14:23:20.535314] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:00.896 [2024-07-26 14:23:20.535342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:00.896 [2024-07-26 14:23:20.535352] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:00.896 [2024-07-26 14:23:20.535376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535400] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:00.896 [2024-07-26 14:23:20.535422] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535432] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:00.896 [2024-07-26 14:23:20.535455] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535476] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:00.896 [2024-07-26 14:23:20.535488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535497] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:00.896 [2024-07-26 14:23:20.535528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:00.896 [2024-07-26 14:23:20.535584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.896 [2024-07-26 14:23:20.535607] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:00.896 [2024-07-26 14:23:20.535617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:00.896 [2024-07-26 14:23:20.535629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:00.896 [2024-07-26 14:23:20.535639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:00.896 [2024-07-26 14:23:20.535665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:00.896 [2024-07-26 14:23:20.535675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:00.896 [2024-07-26 14:23:20.535699] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:00.896 [2024-07-26 14:23:20.535712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535722] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:00.896 [2024-07-26 14:23:20.535736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:00.896 [2024-07-26 14:23:20.535747] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535761] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:00.896 [2024-07-26 14:23:20.535772] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:00.896 [2024-07-26 14:23:20.535787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:00.896 [2024-07-26 14:23:20.535798] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:00.896 [2024-07-26 14:23:20.535810] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:00.896 [2024-07-26 14:23:20.535820] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:00.896 [2024-07-26 14:23:20.535834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:00.896 [2024-07-26 14:23:20.535863] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:00.896 [2024-07-26 14:23:20.535879] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.535895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:00.896 [2024-07-26 14:23:20.535920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:00.896 [2024-07-26 14:23:20.535932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:00.896 [2024-07-26 14:23:20.535977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:00.896 [2024-07-26 14:23:20.535990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:00.896 [2024-07-26 14:23:20.536004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:00.896 [2024-07-26 14:23:20.536015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:00.896 [2024-07-26 14:23:20.536028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:00.896 [2024-07-26 14:23:20.536040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:00.896 [2024-07-26 14:23:20.536056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.536068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.536082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.536094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.536107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:00.896 [2024-07-26 14:23:20.536119] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:00.896 [2024-07-26 14:23:20.536133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.536146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:00.896 [2024-07-26 14:23:20.536159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:00.896 [2024-07-26 14:23:20.536171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:00.896 [2024-07-26 14:23:20.536186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:00.896 [2024-07-26 14:23:20.536198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.896 [2024-07-26 14:23:20.536215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:00.896 [2024-07-26 14:23:20.536227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.147 ms 00:18:00.896 [2024-07-26 14:23:20.536240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.896 [2024-07-26 14:23:20.536312] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:00.896 [2024-07-26 14:23:20.536332] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:03.430 [2024-07-26 14:23:22.639172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.639243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:03.430 [2024-07-26 14:23:22.639281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2102.871 ms 00:18:03.430 [2024-07-26 14:23:22.639294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.676220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.676299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.430 [2024-07-26 14:23:22.676319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.687 ms 00:18:03.430 [2024-07-26 14:23:22.676332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.676503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.676525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.430 [2024-07-26 14:23:22.676538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:03.430 [2024-07-26 14:23:22.676551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.709924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.710009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.430 [2024-07-26 14:23:22.710028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.296 ms 00:18:03.430 [2024-07-26 14:23:22.710040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.710085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.710102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.430 [2024-07-26 14:23:22.710114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:03.430 [2024-07-26 14:23:22.710125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.710549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.710571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.430 [2024-07-26 14:23:22.710583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:18:03.430 [2024-07-26 14:23:22.710595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.710744] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.710763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.430 [2024-07-26 14:23:22.710776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:18:03.430 [2024-07-26 14:23:22.710805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.724922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.724960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.430 [2024-07-26 14:23:22.724992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.096 ms 00:18:03.430 [2024-07-26 14:23:22.725003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.736709] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:03.430 [2024-07-26 14:23:22.741446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.741479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:03.430 [2024-07-26 14:23:22.741512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.354 ms 00:18:03.430 [2024-07-26 14:23:22.741522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.798620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.798695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:03.430 [2024-07-26 14:23:22.798734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.058 ms 00:18:03.430 [2024-07-26 14:23:22.798745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.799002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.799039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:03.430 [2024-07-26 14:23:22.799056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:18:03.430 [2024-07-26 14:23:22.799068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.824814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.824870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:03.430 [2024-07-26 14:23:22.824906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.677 ms 00:18:03.430 [2024-07-26 14:23:22.824952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.850137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.850175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:03.430 [2024-07-26 14:23:22.850212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.133 ms 00:18:03.430 [2024-07-26 14:23:22.850221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.850864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.850908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:03.430 [2024-07-26 14:23:22.850927] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:18:03.430 [2024-07-26 14:23:22.850939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.933289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.933358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:03.430 [2024-07-26 14:23:22.933397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.274 ms 00:18:03.430 [2024-07-26 14:23:22.933408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.430 [2024-07-26 14:23:22.962109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.430 [2024-07-26 14:23:22.962152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:03.430 [2024-07-26 14:23:22.962187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.652 ms 00:18:03.430 [2024-07-26 14:23:22.962200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.431 [2024-07-26 14:23:22.990010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.431 [2024-07-26 14:23:22.990052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:03.431 [2024-07-26 14:23:22.990086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.763 ms 00:18:03.431 [2024-07-26 14:23:22.990096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.431 [2024-07-26 14:23:23.018207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.431 [2024-07-26 14:23:23.018245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:03.431 [2024-07-26 14:23:23.018279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.065 ms 00:18:03.431 [2024-07-26 14:23:23.018290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.431 [2024-07-26 14:23:23.018339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.431 [2024-07-26 14:23:23.018357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:03.431 [2024-07-26 14:23:23.018372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:03.431 [2024-07-26 14:23:23.018382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.431 [2024-07-26 14:23:23.018499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.431 [2024-07-26 14:23:23.018517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:03.431 [2024-07-26 14:23:23.018531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:03.431 [2024-07-26 14:23:23.018545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.431 [2024-07-26 14:23:23.019743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2497.337 ms, result 0 00:18:03.431 { 00:18:03.431 "name": "ftl0", 00:18:03.431 "uuid": "02b6963d-a09a-473c-892a-21a06fd75d31" 00:18:03.431 } 00:18:03.431 14:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:03.431 14:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:18:03.431 14:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:18:03.698 14:23:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
00:18:03.698 [2024-07-26 14:23:23.440072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:03.698 I/O size of 69632 is greater than zero copy threshold (65536).
00:18:03.698 Zero copy mechanism will not be used.
00:18:03.698 Running I/O for 4 seconds...
00:18:07.890
00:18:07.890 Latency(us)
00:18:07.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:07.890 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:07.890 ftl0 : 4.00 1742.17 115.69 0.00 0.00 599.04 251.35 878.78
00:18:07.890 ===================================================================================================================
00:18:07.890 Total : 1742.17 115.69 0.00 0.00 599.04 251.35 878.78
00:18:07.890 [2024-07-26 14:23:27.448868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:07.890 0
00:18:07.890 14:23:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:18:07.890 [2024-07-26 14:23:27.582218] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:07.890 Running I/O for 4 seconds...
00:18:12.076
00:18:12.076 Latency(us)
00:18:12.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:12.076 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:12.076 ftl0 : 4.03 7640.53 29.85 0.00 0.00 16687.71 329.54 31695.59
00:18:12.076 ===================================================================================================================
00:18:12.076 Total : 7640.53 29.85 0.00 0.00 16687.71 0.00 31695.59
00:18:12.076 [2024-07-26 14:23:31.618137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:12.076 0
00:18:12.076 14:23:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:18:12.076 [2024-07-26 14:23:31.748447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:12.076 Running I/O for 4 seconds...
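The three bdevperf passes in this phase (the verify pass above is still running at this point in the log) are driven over RPC: bdevperf is started in wait mode (build/examples/bdevperf -z -T ftl0, as recorded in the timing label further down) and each workload is then submitted through the bdevperf.py helper. A rough manual reproduction, assuming the same tree layout and an ftl0 bdev already configured (for example from a JSON config), might look like the sketch below; the backgrounding and the BDEVPERF/PERF_RPC variable names are illustrative and not part of this run:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  PERF_RPC=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  $BDEVPERF -z -T ftl0 &                                    # wait for RPC, target only the ftl0 bdev
  $PERF_RPC perform_tests -q 1 -w randwrite -t 4 -o 69632   # queue depth 1, 68 KiB writes (above the 64 KiB zero-copy threshold)
  $PERF_RPC perform_tests -q 128 -w randwrite -t 4 -o 4096  # queue depth 128, 4 KiB random writes
  $PERF_RPC perform_tests -q 128 -w verify -t 4 -o 4096     # queue depth 128, 4 KiB verify pass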
00:18:16.264
00:18:16.264 Latency(us)
00:18:16.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:16.264 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:16.264 Verification LBA range: start 0x0 length 0x1400000
00:18:16.264 ftl0 : 4.01 5376.11 21.00 0.00 0.00 23719.48 364.92 27644.28
00:18:16.264 ===================================================================================================================
00:18:16.264 Total : 5376.11 21.00 0.00 0.00 23719.48 0.00 27644.28
00:18:16.264 0
00:18:16.264 [2024-07-26 14:23:35.779676] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:16.264 14:23:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:18:16.523 [2024-07-26 14:23:36.036758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.523 [2024-07-26 14:23:36.036817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:16.523 [2024-07-26 14:23:36.036855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:16.523 [2024-07-26 14:23:36.036867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.523 [2024-07-26 14:23:36.036899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:16.523 [2024-07-26 14:23:36.039960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.523 [2024-07-26 14:23:36.040011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:16.523 [2024-07-26 14:23:36.040026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.002 ms
00:18:16.523 [2024-07-26 14:23:36.040038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.523 [2024-07-26 14:23:36.041726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.523 [2024-07-26 14:23:36.041788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:16.523 [2024-07-26 14:23:36.041804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms
00:18:16.523 [2024-07-26 14:23:36.041816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.524 [2024-07-26 14:23:36.215768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.524 [2024-07-26 14:23:36.215851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:16.524 [2024-07-26 14:23:36.215875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 173.929 ms
00:18:16.524 [2024-07-26 14:23:36.215908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.524 [2024-07-26 14:23:36.222474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.524 [2024-07-26 14:23:36.222531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:16.524 [2024-07-26 14:23:36.222547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.486 ms
00:18:16.524 [2024-07-26 14:23:36.222559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:16.524 [2024-07-26 14:23:36.252528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:16.524 [2024-07-26 14:23:36.252587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:16.524 [2024-07-26 14:23:36.252604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 29.872 ms 00:18:16.524 [2024-07-26 14:23:36.252616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.524 [2024-07-26 14:23:36.269608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.524 [2024-07-26 14:23:36.269669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:16.524 [2024-07-26 14:23:36.269688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.951 ms 00:18:16.524 [2024-07-26 14:23:36.269701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.524 [2024-07-26 14:23:36.269859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.524 [2024-07-26 14:23:36.269883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:16.524 [2024-07-26 14:23:36.269940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:18:16.524 [2024-07-26 14:23:36.269977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.784 [2024-07-26 14:23:36.299157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.784 [2024-07-26 14:23:36.299218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:16.784 [2024-07-26 14:23:36.299235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.157 ms 00:18:16.784 [2024-07-26 14:23:36.299247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.784 [2024-07-26 14:23:36.330771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.784 [2024-07-26 14:23:36.330823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:16.784 [2024-07-26 14:23:36.330842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.414 ms 00:18:16.784 [2024-07-26 14:23:36.330856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.784 [2024-07-26 14:23:36.361320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.784 [2024-07-26 14:23:36.361389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:16.784 [2024-07-26 14:23:36.361406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.403 ms 00:18:16.784 [2024-07-26 14:23:36.361418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.784 [2024-07-26 14:23:36.392212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.784 [2024-07-26 14:23:36.392300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:16.784 [2024-07-26 14:23:36.392318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.696 ms 00:18:16.784 [2024-07-26 14:23:36.392333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.784 [2024-07-26 14:23:36.392375] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:16.784 [2024-07-26 14:23:36.392402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:16.784 [2024-07-26 14:23:36.392453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.392991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:16.784 [2024-07-26 14:23:36.393262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393553] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:16.785 [2024-07-26 14:23:36.393873] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:16.785 [2024-07-26 14:23:36.393885] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 02b6963d-a09a-473c-892a-21a06fd75d31 00:18:16.785 [2024-07-26 14:23:36.393921] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:16.785 [2024-07-26 14:23:36.393933] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:16.785 [2024-07-26 14:23:36.393946] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:16.785 [2024-07-26 14:23:36.393960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:16.785 [2024-07-26 14:23:36.393973] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:16.785 [2024-07-26 14:23:36.393984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:16.785 [2024-07-26 14:23:36.393997] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:16.785 [2024-07-26 14:23:36.394008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:16.785 [2024-07-26 14:23:36.394036] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:16.785 [2024-07-26 14:23:36.394049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.785 [2024-07-26 14:23:36.394063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:16.785 [2024-07-26 14:23:36.394076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:18:16.785 [2024-07-26 14:23:36.394103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.409501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.785 [2024-07-26 14:23:36.409560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:16.785 [2024-07-26 14:23:36.409576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.336 ms 00:18:16.785 [2024-07-26 14:23:36.409588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.410036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.785 [2024-07-26 14:23:36.410082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:16.785 [2024-07-26 14:23:36.410098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:18:16.785 [2024-07-26 14:23:36.410111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.443975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.785 [2024-07-26 14:23:36.444040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.785 [2024-07-26 14:23:36.444070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.785 [2024-07-26 14:23:36.444085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.444146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.785 [2024-07-26 14:23:36.444162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.785 [2024-07-26 14:23:36.444173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.785 [2024-07-26 14:23:36.444184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.444275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.785 [2024-07-26 14:23:36.444301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.785 [2024-07-26 14:23:36.444312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.785 [2024-07-26 14:23:36.444323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.444343] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.785 [2024-07-26 14:23:36.444357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.785 [2024-07-26 14:23:36.444367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.785 [2024-07-26 14:23:36.444378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.785 [2024-07-26 14:23:36.534439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:16.785 [2024-07-26 14:23:36.534513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.785 [2024-07-26 14:23:36.534530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:16.785 [2024-07-26 14:23:36.534545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.609991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.044 [2024-07-26 14:23:36.610106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.044 [2024-07-26 14:23:36.610275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.044 [2024-07-26 14:23:36.610367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.044 [2024-07-26 14:23:36.610518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.044 [2024-07-26 14:23:36.610609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.044 [2024-07-26 14:23:36.610685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610697] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.044 [2024-07-26 14:23:36.610761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.044 [2024-07-26 14:23:36.610772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.044 [2024-07-26 14:23:36.610783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.044 [2024-07-26 14:23:36.610911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 574.121 ms, result 0 00:18:17.044 true 00:18:17.044 14:23:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 78410 00:18:17.044 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 78410 ']' 00:18:17.044 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 78410 00:18:17.044 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:18:17.044 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.044 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78410 00:18:17.044 killing process with pid 78410 00:18:17.045 Received shutdown signal, test time was about 4.000000 seconds 00:18:17.045 00:18:17.045 Latency(us) 00:18:17.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.045 =================================================================================================================== 00:18:17.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.045 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.045 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.045 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78410' 00:18:17.045 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 78410 00:18:17.045 14:23:36 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 78410 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 Remove shared memory files 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:21.257 ************************************ 00:18:21.257 END TEST ftl_bdevperf 00:18:21.257 ************************************ 00:18:21.257 00:18:21.257 real 0m24.314s 00:18:21.257 user 0m27.774s 00:18:21.257 sys 0m1.040s 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.257 14:23:40 ftl.ftl_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:18:21.257 14:23:40 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:21.257 14:23:40 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:21.257 14:23:40 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.257 14:23:40 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:21.257 ************************************ 00:18:21.257 START TEST ftl_trim 00:18:21.257 ************************************ 00:18:21.257 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:21.257 * Looking for test storage... 00:18:21.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:21.257 14:23:40 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78765 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:21.258 14:23:40 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78765 00:18:21.258 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 78765 ']' 00:18:21.258 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.258 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:21.258 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.258 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:21.258 14:23:40 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:21.258 [2024-07-26 14:23:40.492836] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
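The trim test prologue traced above reduces to a handful of settings plus a target launch. A condensed sketch with the values from this run (the pid capture via $! and the comments are illustrative; waitforlisten is the helper from test/common/autotest_common.sh that blocks until the /var/tmp/spdk.sock RPC socket is up):

  device=0000:00:11.0                 # base device for the FTL bdev
  cache_device=0000:00:10.0           # device backing the non-volatile write-buffer cache
  timeout=240
  data_size_in_blocks=65536
  unmap_size_in_blocks=1024
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
  svcpid=$!                           # 78765 in this run
  waitforlisten "$svcpid"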
00:18:21.258 [2024-07-26 14:23:40.493002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78765 ] 00:18:21.258 [2024-07-26 14:23:40.652960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.258 [2024-07-26 14:23:40.818126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.258 [2024-07-26 14:23:40.818246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.258 [2024-07-26 14:23:40.818276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.824 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.824 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:21.824 14:23:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:21.824 14:23:41 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:21.824 14:23:41 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:21.824 14:23:41 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:21.824 14:23:41 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:21.824 14:23:41 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:22.082 14:23:41 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:22.082 14:23:41 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:22.082 14:23:41 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:22.082 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:22.082 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:22.082 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:22.082 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:22.082 14:23:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:22.647 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:22.647 { 00:18:22.647 "name": "nvme0n1", 00:18:22.647 "aliases": [ 00:18:22.647 "e06a5ac3-bbfa-4875-aa41-7be410a11d77" 00:18:22.647 ], 00:18:22.647 "product_name": "NVMe disk", 00:18:22.647 "block_size": 4096, 00:18:22.647 "num_blocks": 1310720, 00:18:22.647 "uuid": "e06a5ac3-bbfa-4875-aa41-7be410a11d77", 00:18:22.647 "assigned_rate_limits": { 00:18:22.647 "rw_ios_per_sec": 0, 00:18:22.647 "rw_mbytes_per_sec": 0, 00:18:22.647 "r_mbytes_per_sec": 0, 00:18:22.647 "w_mbytes_per_sec": 0 00:18:22.647 }, 00:18:22.647 "claimed": true, 00:18:22.647 "claim_type": "read_many_write_one", 00:18:22.647 "zoned": false, 00:18:22.647 "supported_io_types": { 00:18:22.647 "read": true, 00:18:22.647 "write": true, 00:18:22.647 "unmap": true, 00:18:22.647 "flush": true, 00:18:22.647 "reset": true, 00:18:22.647 "nvme_admin": true, 00:18:22.647 "nvme_io": true, 00:18:22.647 "nvme_io_md": false, 00:18:22.647 "write_zeroes": true, 00:18:22.647 "zcopy": false, 00:18:22.647 "get_zone_info": false, 00:18:22.647 "zone_management": false, 00:18:22.647 "zone_append": false, 00:18:22.647 "compare": true, 00:18:22.647 "compare_and_write": false, 00:18:22.647 "abort": true, 00:18:22.647 "seek_hole": false, 00:18:22.647 "seek_data": false, 00:18:22.647 
"copy": true, 00:18:22.647 "nvme_iov_md": false 00:18:22.648 }, 00:18:22.648 "driver_specific": { 00:18:22.648 "nvme": [ 00:18:22.648 { 00:18:22.648 "pci_address": "0000:00:11.0", 00:18:22.648 "trid": { 00:18:22.648 "trtype": "PCIe", 00:18:22.648 "traddr": "0000:00:11.0" 00:18:22.648 }, 00:18:22.648 "ctrlr_data": { 00:18:22.648 "cntlid": 0, 00:18:22.648 "vendor_id": "0x1b36", 00:18:22.648 "model_number": "QEMU NVMe Ctrl", 00:18:22.648 "serial_number": "12341", 00:18:22.648 "firmware_revision": "8.0.0", 00:18:22.648 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:22.648 "oacs": { 00:18:22.648 "security": 0, 00:18:22.648 "format": 1, 00:18:22.648 "firmware": 0, 00:18:22.648 "ns_manage": 1 00:18:22.648 }, 00:18:22.648 "multi_ctrlr": false, 00:18:22.648 "ana_reporting": false 00:18:22.648 }, 00:18:22.648 "vs": { 00:18:22.648 "nvme_version": "1.4" 00:18:22.648 }, 00:18:22.648 "ns_data": { 00:18:22.648 "id": 1, 00:18:22.648 "can_share": false 00:18:22.648 } 00:18:22.648 } 00:18:22.648 ], 00:18:22.648 "mp_policy": "active_passive" 00:18:22.648 } 00:18:22.648 } 00:18:22.648 ]' 00:18:22.648 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:22.648 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:22.648 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:22.648 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:22.648 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:22.648 14:23:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:18:22.648 14:23:42 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:22.648 14:23:42 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:22.648 14:23:42 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:22.648 14:23:42 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:22.648 14:23:42 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:22.906 14:23:42 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d 00:18:22.906 14:23:42 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:22.906 14:23:42 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d6d0cb26-ac4b-4bf7-b7d2-40657c8ba05d 00:18:23.164 14:23:42 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:23.422 14:23:42 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b9b29176-b228-4661-95f8-1f3561874bbd 00:18:23.422 14:23:42 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b9b29176-b228-4661-95f8-1f3561874bbd 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:23.679 14:23:43 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:23.679 14:23:43 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:23.679 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:23.679 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:23.679 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:23.679 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:23.935 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:23.936 { 00:18:23.936 "name": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:23.936 "aliases": [ 00:18:23.936 "lvs/nvme0n1p0" 00:18:23.936 ], 00:18:23.936 "product_name": "Logical Volume", 00:18:23.936 "block_size": 4096, 00:18:23.936 "num_blocks": 26476544, 00:18:23.936 "uuid": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:23.936 "assigned_rate_limits": { 00:18:23.936 "rw_ios_per_sec": 0, 00:18:23.936 "rw_mbytes_per_sec": 0, 00:18:23.936 "r_mbytes_per_sec": 0, 00:18:23.936 "w_mbytes_per_sec": 0 00:18:23.936 }, 00:18:23.936 "claimed": false, 00:18:23.936 "zoned": false, 00:18:23.936 "supported_io_types": { 00:18:23.936 "read": true, 00:18:23.936 "write": true, 00:18:23.936 "unmap": true, 00:18:23.936 "flush": false, 00:18:23.936 "reset": true, 00:18:23.936 "nvme_admin": false, 00:18:23.936 "nvme_io": false, 00:18:23.936 "nvme_io_md": false, 00:18:23.936 "write_zeroes": true, 00:18:23.936 "zcopy": false, 00:18:23.936 "get_zone_info": false, 00:18:23.936 "zone_management": false, 00:18:23.936 "zone_append": false, 00:18:23.936 "compare": false, 00:18:23.936 "compare_and_write": false, 00:18:23.936 "abort": false, 00:18:23.936 "seek_hole": true, 00:18:23.936 "seek_data": true, 00:18:23.936 "copy": false, 00:18:23.936 "nvme_iov_md": false 00:18:23.936 }, 00:18:23.936 "driver_specific": { 00:18:23.936 "lvol": { 00:18:23.936 "lvol_store_uuid": "b9b29176-b228-4661-95f8-1f3561874bbd", 00:18:23.936 "base_bdev": "nvme0n1", 00:18:23.936 "thin_provision": true, 00:18:23.936 "num_allocated_clusters": 0, 00:18:23.936 "snapshot": false, 00:18:23.936 "clone": false, 00:18:23.936 "esnap_clone": false 00:18:23.936 } 00:18:23.936 } 00:18:23.936 } 00:18:23.936 ]' 00:18:23.936 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:23.936 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:23.936 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:23.936 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:23.936 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:23.936 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:23.936 14:23:43 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:23.936 14:23:43 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:23.936 14:23:43 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:24.193 14:23:43 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:24.193 14:23:43 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:24.193 14:23:43 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:24.193 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:24.193 
14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.193 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:24.193 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:18:24.193 14:23:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:24.450 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.450 { 00:18:24.450 "name": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:24.450 "aliases": [ 00:18:24.450 "lvs/nvme0n1p0" 00:18:24.450 ], 00:18:24.450 "product_name": "Logical Volume", 00:18:24.450 "block_size": 4096, 00:18:24.450 "num_blocks": 26476544, 00:18:24.450 "uuid": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:24.450 "assigned_rate_limits": { 00:18:24.450 "rw_ios_per_sec": 0, 00:18:24.450 "rw_mbytes_per_sec": 0, 00:18:24.450 "r_mbytes_per_sec": 0, 00:18:24.450 "w_mbytes_per_sec": 0 00:18:24.450 }, 00:18:24.450 "claimed": false, 00:18:24.450 "zoned": false, 00:18:24.450 "supported_io_types": { 00:18:24.450 "read": true, 00:18:24.450 "write": true, 00:18:24.450 "unmap": true, 00:18:24.450 "flush": false, 00:18:24.450 "reset": true, 00:18:24.450 "nvme_admin": false, 00:18:24.450 "nvme_io": false, 00:18:24.450 "nvme_io_md": false, 00:18:24.450 "write_zeroes": true, 00:18:24.450 "zcopy": false, 00:18:24.450 "get_zone_info": false, 00:18:24.450 "zone_management": false, 00:18:24.450 "zone_append": false, 00:18:24.450 "compare": false, 00:18:24.450 "compare_and_write": false, 00:18:24.450 "abort": false, 00:18:24.450 "seek_hole": true, 00:18:24.450 "seek_data": true, 00:18:24.450 "copy": false, 00:18:24.450 "nvme_iov_md": false 00:18:24.450 }, 00:18:24.450 "driver_specific": { 00:18:24.450 "lvol": { 00:18:24.450 "lvol_store_uuid": "b9b29176-b228-4661-95f8-1f3561874bbd", 00:18:24.450 "base_bdev": "nvme0n1", 00:18:24.450 "thin_provision": true, 00:18:24.450 "num_allocated_clusters": 0, 00:18:24.450 "snapshot": false, 00:18:24.450 "clone": false, 00:18:24.450 "esnap_clone": false 00:18:24.450 } 00:18:24.450 } 00:18:24.450 } 00:18:24.450 ]' 00:18:24.450 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:24.708 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:24.708 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:24.708 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:24.708 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:24.708 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:24.708 14:23:44 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:24.708 14:23:44 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:24.965 14:23:44 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:24.965 14:23:44 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:24.965 14:23:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:24.965 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:24.965 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:24.965 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:18:24.965 14:23:44 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:18:24.965 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0381ae71-e54b-4f3b-9367-53bbde61410c 00:18:24.965 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:24.965 { 00:18:24.965 "name": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:24.965 "aliases": [ 00:18:24.965 "lvs/nvme0n1p0" 00:18:24.965 ], 00:18:24.965 "product_name": "Logical Volume", 00:18:24.965 "block_size": 4096, 00:18:24.965 "num_blocks": 26476544, 00:18:24.965 "uuid": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:24.965 "assigned_rate_limits": { 00:18:24.965 "rw_ios_per_sec": 0, 00:18:24.965 "rw_mbytes_per_sec": 0, 00:18:24.965 "r_mbytes_per_sec": 0, 00:18:24.965 "w_mbytes_per_sec": 0 00:18:24.965 }, 00:18:24.965 "claimed": false, 00:18:24.965 "zoned": false, 00:18:24.965 "supported_io_types": { 00:18:24.965 "read": true, 00:18:24.965 "write": true, 00:18:24.965 "unmap": true, 00:18:24.965 "flush": false, 00:18:24.965 "reset": true, 00:18:24.965 "nvme_admin": false, 00:18:24.965 "nvme_io": false, 00:18:24.965 "nvme_io_md": false, 00:18:24.965 "write_zeroes": true, 00:18:24.965 "zcopy": false, 00:18:24.965 "get_zone_info": false, 00:18:24.965 "zone_management": false, 00:18:24.965 "zone_append": false, 00:18:24.965 "compare": false, 00:18:24.965 "compare_and_write": false, 00:18:24.965 "abort": false, 00:18:24.965 "seek_hole": true, 00:18:24.965 "seek_data": true, 00:18:24.965 "copy": false, 00:18:24.965 "nvme_iov_md": false 00:18:24.965 }, 00:18:24.965 "driver_specific": { 00:18:24.965 "lvol": { 00:18:24.965 "lvol_store_uuid": "b9b29176-b228-4661-95f8-1f3561874bbd", 00:18:24.965 "base_bdev": "nvme0n1", 00:18:24.965 "thin_provision": true, 00:18:24.965 "num_allocated_clusters": 0, 00:18:24.965 "snapshot": false, 00:18:24.965 "clone": false, 00:18:24.965 "esnap_clone": false 00:18:24.965 } 00:18:24.965 } 00:18:24.965 } 00:18:24.965 ]' 00:18:24.965 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:25.222 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:18:25.222 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:25.222 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:25.222 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:25.222 14:23:44 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:18:25.222 14:23:44 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:25.222 14:23:44 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0381ae71-e54b-4f3b-9367-53bbde61410c -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:25.481 [2024-07-26 14:23:45.033362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.033442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:25.481 [2024-07-26 14:23:45.033463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:25.481 [2024-07-26 14:23:45.033479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.036840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.036887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:25.481 [2024-07-26 14:23:45.036931] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.327 ms 00:18:25.481 [2024-07-26 14:23:45.036946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.037105] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:25.481 [2024-07-26 14:23:45.038037] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:25.481 [2024-07-26 14:23:45.038079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.038100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:25.481 [2024-07-26 14:23:45.038114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:18:25.481 [2024-07-26 14:23:45.038143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.038373] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:18:25.481 [2024-07-26 14:23:45.039453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.039491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:25.481 [2024-07-26 14:23:45.039526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:25.481 [2024-07-26 14:23:45.039564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.044265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.044309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:25.481 [2024-07-26 14:23:45.044344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:18:25.481 [2024-07-26 14:23:45.044355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.044525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.044547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:25.481 [2024-07-26 14:23:45.044562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:25.481 [2024-07-26 14:23:45.044573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.044627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.044644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:25.481 [2024-07-26 14:23:45.044657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:25.481 [2024-07-26 14:23:45.044668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.044715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:25.481 [2024-07-26 14:23:45.049110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.049168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:25.481 [2024-07-26 14:23:45.049184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.406 ms 00:18:25.481 [2024-07-26 14:23:45.049197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 
14:23:45.049294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.049317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:25.481 [2024-07-26 14:23:45.049330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:25.481 [2024-07-26 14:23:45.049342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.049377] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:25.481 [2024-07-26 14:23:45.049523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:25.481 [2024-07-26 14:23:45.049541] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:25.481 [2024-07-26 14:23:45.049559] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:25.481 [2024-07-26 14:23:45.049573] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:25.481 [2024-07-26 14:23:45.049588] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:25.481 [2024-07-26 14:23:45.049602] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:25.481 [2024-07-26 14:23:45.049616] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:25.481 [2024-07-26 14:23:45.049627] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:25.481 [2024-07-26 14:23:45.049661] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:25.481 [2024-07-26 14:23:45.049673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.049686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:25.481 [2024-07-26 14:23:45.049698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:18:25.481 [2024-07-26 14:23:45.049710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.049806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.481 [2024-07-26 14:23:45.049824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:25.481 [2024-07-26 14:23:45.049836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:25.481 [2024-07-26 14:23:45.049851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.481 [2024-07-26 14:23:45.050003] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:25.481 [2024-07-26 14:23:45.050029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:25.481 [2024-07-26 14:23:45.050042] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:25.481 [2024-07-26 14:23:45.050079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:18:25.481 [2024-07-26 14:23:45.050112] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050124] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.481 [2024-07-26 14:23:45.050134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:25.481 [2024-07-26 14:23:45.050147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:25.481 [2024-07-26 14:23:45.050157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:25.481 [2024-07-26 14:23:45.050171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:25.481 [2024-07-26 14:23:45.050182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:25.481 [2024-07-26 14:23:45.050194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:25.481 [2024-07-26 14:23:45.050217] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:25.481 [2024-07-26 14:23:45.050249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:25.481 [2024-07-26 14:23:45.050298] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050322] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:25.481 [2024-07-26 14:23:45.050332] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050343] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:25.481 [2024-07-26 14:23:45.050365] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050374] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050386] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:25.481 [2024-07-26 14:23:45.050396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.481 [2024-07-26 14:23:45.050419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:25.481 [2024-07-26 14:23:45.050431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:25.481 [2024-07-26 14:23:45.050440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:25.481 [2024-07-26 14:23:45.050452] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:25.481 [2024-07-26 14:23:45.050461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:25.481 [2024-07-26 14:23:45.050474] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:25.481 [2024-07-26 14:23:45.050496] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:25.481 [2024-07-26 14:23:45.050505] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050517] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:25.481 [2024-07-26 14:23:45.050528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:25.481 [2024-07-26 14:23:45.050541] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050551] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:25.481 [2024-07-26 14:23:45.050567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:25.481 [2024-07-26 14:23:45.050577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:25.481 [2024-07-26 14:23:45.050606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:25.481 [2024-07-26 14:23:45.050617] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:25.481 [2024-07-26 14:23:45.050628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:25.481 [2024-07-26 14:23:45.050638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:25.481 [2024-07-26 14:23:45.050655] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:25.481 [2024-07-26 14:23:45.050668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.481 [2024-07-26 14:23:45.050683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:25.481 [2024-07-26 14:23:45.050694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:25.481 [2024-07-26 14:23:45.050707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:25.481 [2024-07-26 14:23:45.050718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:25.482 [2024-07-26 14:23:45.050730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:25.482 [2024-07-26 14:23:45.050741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:25.482 [2024-07-26 14:23:45.050754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:25.482 [2024-07-26 14:23:45.050765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:25.482 [2024-07-26 14:23:45.050779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:25.482 [2024-07-26 14:23:45.050790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:25.482 [2024-07-26 14:23:45.050805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:25.482 [2024-07-26 14:23:45.050816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:25.482 [2024-07-26 14:23:45.050828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:25.482 [2024-07-26 14:23:45.050840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:25.482 [2024-07-26 14:23:45.050852] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:25.482 [2024-07-26 14:23:45.050864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:25.482 [2024-07-26 14:23:45.050878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:25.482 [2024-07-26 14:23:45.050889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:25.482 [2024-07-26 14:23:45.050902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:25.482 [2024-07-26 14:23:45.050930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:25.482 [2024-07-26 14:23:45.050945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:25.482 [2024-07-26 14:23:45.050990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:25.482 [2024-07-26 14:23:45.051023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:18:25.482 [2024-07-26 14:23:45.051035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:25.482 [2024-07-26 14:23:45.051127] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
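The capacity figures in the layout dump above follow directly from the bdev geometry traced earlier in this run. A minimal cross-check, assuming only the same rpc.py and jq tools the test itself invokes (the UUID is the logical volume created above; nothing here is part of the test output):

bdev=0381ae71-e54b-4f3b-9367-53bbde61410c
info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev")
bs=$(jq '.[] .block_size' <<< "$info")       # 4096
nb=$(jq '.[] .num_blocks' <<< "$info")       # 26476544
echo $(( bs * nb / 1024 / 1024 ))            # 103424  -> "Base device capacity: 103424.00 MiB"
echo $(( 23592960 * 4 / 1024 / 1024 ))       # 90      -> "Region l2p ... blocks: 90.00 MiB" (L2P entries x 4-byte addresses)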
00:18:25.482 [2024-07-26 14:23:45.051145] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:27.381 [2024-07-26 14:23:47.116484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.381 [2024-07-26 14:23:47.116557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:27.381 [2024-07-26 14:23:47.116597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2065.359 ms 00:18:27.381 [2024-07-26 14:23:47.116609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.147040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.147101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:27.639 [2024-07-26 14:23:47.147141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.146 ms 00:18:27.639 [2024-07-26 14:23:47.147153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.147334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.147353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:27.639 [2024-07-26 14:23:47.147372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:27.639 [2024-07-26 14:23:47.147383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.198929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.198998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:27.639 [2024-07-26 14:23:47.199029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.499 ms 00:18:27.639 [2024-07-26 14:23:47.199045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.199234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.199260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:27.639 [2024-07-26 14:23:47.199283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:27.639 [2024-07-26 14:23:47.199299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.199737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.199761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:27.639 [2024-07-26 14:23:47.199780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:27.639 [2024-07-26 14:23:47.199795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.200044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.200067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:27.639 [2024-07-26 14:23:47.200087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:18:27.639 [2024-07-26 14:23:47.200101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.217664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.217713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:27.639 [2024-07-26 
14:23:47.217750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.510 ms 00:18:27.639 [2024-07-26 14:23:47.217761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.230163] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:27.639 [2024-07-26 14:23:47.244379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.244471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:27.639 [2024-07-26 14:23:47.244492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.435 ms 00:18:27.639 [2024-07-26 14:23:47.244506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.639 [2024-07-26 14:23:47.305111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.639 [2024-07-26 14:23:47.305452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:27.639 [2024-07-26 14:23:47.305585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.438 ms 00:18:27.640 [2024-07-26 14:23:47.305615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.640 [2024-07-26 14:23:47.305998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.640 [2024-07-26 14:23:47.306026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:27.640 [2024-07-26 14:23:47.306040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:27.640 [2024-07-26 14:23:47.306057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.640 [2024-07-26 14:23:47.334467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.640 [2024-07-26 14:23:47.334530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:27.640 [2024-07-26 14:23:47.334549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.338 ms 00:18:27.640 [2024-07-26 14:23:47.334562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.640 [2024-07-26 14:23:47.362397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.640 [2024-07-26 14:23:47.362460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:27.640 [2024-07-26 14:23:47.362479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.741 ms 00:18:27.640 [2024-07-26 14:23:47.362509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.640 [2024-07-26 14:23:47.363374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.640 [2024-07-26 14:23:47.363437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:27.640 [2024-07-26 14:23:47.363469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:18:27.640 [2024-07-26 14:23:47.363495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 14:23:47.449305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.899 [2024-07-26 14:23:47.449389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:27.899 [2024-07-26 14:23:47.449411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.770 ms 00:18:27.899 [2024-07-26 14:23:47.449427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 
14:23:47.478075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.899 [2024-07-26 14:23:47.478121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:27.899 [2024-07-26 14:23:47.478158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.554 ms 00:18:27.899 [2024-07-26 14:23:47.478172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 14:23:47.508087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.899 [2024-07-26 14:23:47.508152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:27.899 [2024-07-26 14:23:47.508170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.824 ms 00:18:27.899 [2024-07-26 14:23:47.508184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 14:23:47.540478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.899 [2024-07-26 14:23:47.540541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:27.899 [2024-07-26 14:23:47.540575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.195 ms 00:18:27.899 [2024-07-26 14:23:47.540607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 14:23:47.540714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.899 [2024-07-26 14:23:47.540740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:27.899 [2024-07-26 14:23:47.540756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:27.899 [2024-07-26 14:23:47.540773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 14:23:47.540867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.899 [2024-07-26 14:23:47.540888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:27.899 [2024-07-26 14:23:47.540925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:27.899 [2024-07-26 14:23:47.540977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.899 [2024-07-26 14:23:47.542146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:27.899 [2024-07-26 14:23:47.546369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2508.312 ms, result 0 00:18:27.899 [2024-07-26 14:23:47.547350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:27.899 { 00:18:27.899 "name": "ftl0", 00:18:27.899 "uuid": "49cb4bb5-d6b9-48f8-b17f-5687a932e782" 00:18:27.899 } 00:18:27.899 14:23:47 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:27.899 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:18:27.899 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:27.899 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:18:27.899 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:27.899 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:27.899 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:28.158 14:23:47 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:28.417 [ 00:18:28.417 { 00:18:28.417 "name": "ftl0", 00:18:28.417 "aliases": [ 00:18:28.417 "49cb4bb5-d6b9-48f8-b17f-5687a932e782" 00:18:28.417 ], 00:18:28.417 "product_name": "FTL disk", 00:18:28.417 "block_size": 4096, 00:18:28.417 "num_blocks": 23592960, 00:18:28.417 "uuid": "49cb4bb5-d6b9-48f8-b17f-5687a932e782", 00:18:28.417 "assigned_rate_limits": { 00:18:28.417 "rw_ios_per_sec": 0, 00:18:28.417 "rw_mbytes_per_sec": 0, 00:18:28.417 "r_mbytes_per_sec": 0, 00:18:28.417 "w_mbytes_per_sec": 0 00:18:28.417 }, 00:18:28.417 "claimed": false, 00:18:28.417 "zoned": false, 00:18:28.417 "supported_io_types": { 00:18:28.417 "read": true, 00:18:28.417 "write": true, 00:18:28.417 "unmap": true, 00:18:28.417 "flush": true, 00:18:28.417 "reset": false, 00:18:28.417 "nvme_admin": false, 00:18:28.417 "nvme_io": false, 00:18:28.417 "nvme_io_md": false, 00:18:28.417 "write_zeroes": true, 00:18:28.417 "zcopy": false, 00:18:28.417 "get_zone_info": false, 00:18:28.417 "zone_management": false, 00:18:28.417 "zone_append": false, 00:18:28.417 "compare": false, 00:18:28.417 "compare_and_write": false, 00:18:28.417 "abort": false, 00:18:28.417 "seek_hole": false, 00:18:28.417 "seek_data": false, 00:18:28.417 "copy": false, 00:18:28.417 "nvme_iov_md": false 00:18:28.417 }, 00:18:28.417 "driver_specific": { 00:18:28.417 "ftl": { 00:18:28.417 "base_bdev": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:28.417 "cache": "nvc0n1p0" 00:18:28.417 } 00:18:28.417 } 00:18:28.417 } 00:18:28.417 ] 00:18:28.417 14:23:48 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:18:28.417 14:23:48 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:28.417 14:23:48 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:28.676 14:23:48 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:28.676 14:23:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:28.970 14:23:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:28.970 { 00:18:28.970 "name": "ftl0", 00:18:28.970 "aliases": [ 00:18:28.970 "49cb4bb5-d6b9-48f8-b17f-5687a932e782" 00:18:28.970 ], 00:18:28.970 "product_name": "FTL disk", 00:18:28.970 "block_size": 4096, 00:18:28.970 "num_blocks": 23592960, 00:18:28.970 "uuid": "49cb4bb5-d6b9-48f8-b17f-5687a932e782", 00:18:28.970 "assigned_rate_limits": { 00:18:28.970 "rw_ios_per_sec": 0, 00:18:28.970 "rw_mbytes_per_sec": 0, 00:18:28.970 "r_mbytes_per_sec": 0, 00:18:28.970 "w_mbytes_per_sec": 0 00:18:28.970 }, 00:18:28.970 "claimed": false, 00:18:28.970 "zoned": false, 00:18:28.970 "supported_io_types": { 00:18:28.970 "read": true, 00:18:28.970 "write": true, 00:18:28.970 "unmap": true, 00:18:28.970 "flush": true, 00:18:28.970 "reset": false, 00:18:28.970 "nvme_admin": false, 00:18:28.970 "nvme_io": false, 00:18:28.970 "nvme_io_md": false, 00:18:28.970 "write_zeroes": true, 00:18:28.970 "zcopy": false, 00:18:28.970 "get_zone_info": false, 00:18:28.970 "zone_management": false, 00:18:28.970 "zone_append": false, 00:18:28.970 "compare": false, 00:18:28.970 "compare_and_write": false, 00:18:28.970 "abort": false, 00:18:28.970 "seek_hole": false, 00:18:28.970 "seek_data": false, 00:18:28.970 "copy": false, 00:18:28.970 "nvme_iov_md": false 00:18:28.970 }, 00:18:28.970 "driver_specific": { 00:18:28.970 "ftl": { 00:18:28.970 "base_bdev": "0381ae71-e54b-4f3b-9367-53bbde61410c", 00:18:28.970 "cache": "nvc0n1p0" 
00:18:28.970 } 00:18:28.970 } 00:18:28.970 } 00:18:28.970 ]' 00:18:28.970 14:23:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:28.970 14:23:48 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:28.970 14:23:48 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:29.250 [2024-07-26 14:23:48.903407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.903467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:29.250 [2024-07-26 14:23:48.903507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:29.250 [2024-07-26 14:23:48.903520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.903610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:29.250 [2024-07-26 14:23:48.906976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.907026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:29.250 [2024-07-26 14:23:48.907043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:18:29.250 [2024-07-26 14:23:48.907060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.907718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.907758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:29.250 [2024-07-26 14:23:48.907775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:18:29.250 [2024-07-26 14:23:48.907796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.911478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.911510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:29.250 [2024-07-26 14:23:48.911581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.648 ms 00:18:29.250 [2024-07-26 14:23:48.911598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.918571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.918609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:29.250 [2024-07-26 14:23:48.918640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:18:29.250 [2024-07-26 14:23:48.918653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.948062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.948109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:29.250 [2024-07-26 14:23:48.948144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.302 ms 00:18:29.250 [2024-07-26 14:23:48.948160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.965830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.965900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:29.250 [2024-07-26 14:23:48.965951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.575 ms 00:18:29.250 
[2024-07-26 14:23:48.965968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.966217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.966245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:29.250 [2024-07-26 14:23:48.966260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:29.250 [2024-07-26 14:23:48.966275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.250 [2024-07-26 14:23:48.995649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.250 [2024-07-26 14:23:48.995712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:29.250 [2024-07-26 14:23:48.995731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.333 ms 00:18:29.250 [2024-07-26 14:23:48.995745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.510 [2024-07-26 14:23:49.024429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.510 [2024-07-26 14:23:49.024488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:29.510 [2024-07-26 14:23:49.024505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.584 ms 00:18:29.510 [2024-07-26 14:23:49.024520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.510 [2024-07-26 14:23:49.052736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.510 [2024-07-26 14:23:49.052787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:29.510 [2024-07-26 14:23:49.052822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.111 ms 00:18:29.510 [2024-07-26 14:23:49.052836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.510 [2024-07-26 14:23:49.084418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.510 [2024-07-26 14:23:49.084507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:29.510 [2024-07-26 14:23:49.084528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.346 ms 00:18:29.510 [2024-07-26 14:23:49.084558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.510 [2024-07-26 14:23:49.084663] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:29.510 [2024-07-26 14:23:49.084695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084796] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.084988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085226] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:29.510 [2024-07-26 14:23:49.085474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 
14:23:49.085569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:18:29.511 [2024-07-26 14:23:49.085952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.085993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:29.511 [2024-07-26 14:23:49.086206] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:29.511 [2024-07-26 14:23:49.086218] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:18:29.511 [2024-07-26 14:23:49.086235] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:29.511 [2024-07-26 14:23:49.086249] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:29.511 [2024-07-26 14:23:49.086263] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:29.511 [2024-07-26 14:23:49.086290] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:29.511 [2024-07-26 14:23:49.086317] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:29.511 [2024-07-26 14:23:49.086333] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:29.511 [2024-07-26 14:23:49.086363] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:29.511 [2024-07-26 14:23:49.086373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:29.511 [2024-07-26 14:23:49.086385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:29.511 [2024-07-26 14:23:49.086397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.511 [2024-07-26 14:23:49.086410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:29.511 [2024-07-26 14:23:49.086423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:18:29.511 [2024-07-26 14:23:49.086437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.511 [2024-07-26 14:23:49.103476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.511 [2024-07-26 14:23:49.103535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:29.511 [2024-07-26 14:23:49.103579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.998 ms 00:18:29.511 [2024-07-26 14:23:49.103597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.511 [2024-07-26 14:23:49.104140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.511 [2024-07-26 14:23:49.104171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:29.511 [2024-07-26 14:23:49.104187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:18:29.511 [2024-07-26 14:23:49.104201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.511 [2024-07-26 14:23:49.162719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.511 [2024-07-26 14:23:49.162803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.511 [2024-07-26 14:23:49.162822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.511 [2024-07-26 14:23:49.162836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.511 [2024-07-26 14:23:49.163073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.511 [2024-07-26 14:23:49.163098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.511 [2024-07-26 14:23:49.163112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.511 [2024-07-26 14:23:49.163125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.511 [2024-07-26 14:23:49.163214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.511 [2024-07-26 14:23:49.163239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.511 [2024-07-26 14:23:49.163269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.511 [2024-07-26 14:23:49.163286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.511 [2024-07-26 14:23:49.163326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.511 [2024-07-26 14:23:49.163343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.512 [2024-07-26 14:23:49.163356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.512 [2024-07-26 14:23:49.163371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.512 [2024-07-26 14:23:49.260881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:18:29.512 [2024-07-26 14:23:49.260973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.512 [2024-07-26 14:23:49.260993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.512 [2024-07-26 14:23:49.261007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.770 [2024-07-26 14:23:49.338327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.770 [2024-07-26 14:23:49.338414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.770 [2024-07-26 14:23:49.338433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.770 [2024-07-26 14:23:49.338446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.770 [2024-07-26 14:23:49.338596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.770 [2024-07-26 14:23:49.338625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.770 [2024-07-26 14:23:49.338638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.770 [2024-07-26 14:23:49.338653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.770 [2024-07-26 14:23:49.338710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.770 [2024-07-26 14:23:49.338728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.770 [2024-07-26 14:23:49.338740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.770 [2024-07-26 14:23:49.338752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.770 [2024-07-26 14:23:49.338883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.770 [2024-07-26 14:23:49.338960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.770 [2024-07-26 14:23:49.339011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.770 [2024-07-26 14:23:49.339027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.770 [2024-07-26 14:23:49.339107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.771 [2024-07-26 14:23:49.339131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:29.771 [2024-07-26 14:23:49.339144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.771 [2024-07-26 14:23:49.339158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.771 [2024-07-26 14:23:49.339221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.771 [2024-07-26 14:23:49.339259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.771 [2024-07-26 14:23:49.339274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.771 [2024-07-26 14:23:49.339290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.771 [2024-07-26 14:23:49.339384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:29.771 [2024-07-26 14:23:49.339404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.771 [2024-07-26 14:23:49.339417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:29.771 [2024-07-26 14:23:49.339429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.771 [2024-07-26 
14:23:49.339663] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.243 ms, result 0 00:18:29.771 true 00:18:29.771 14:23:49 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78765 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 78765 ']' 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 78765 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78765 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.771 killing process with pid 78765 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78765' 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 78765 00:18:29.771 14:23:49 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 78765 00:18:35.038 14:23:53 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:35.296 65536+0 records in 00:18:35.296 65536+0 records out 00:18:35.296 268435456 bytes (268 MB, 256 MiB) copied, 1.07974 s, 249 MB/s 00:18:35.296 14:23:54 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:35.296 [2024-07-26 14:23:54.976679] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
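The dd step above seeds a 256 MiB random test pattern (the random_pattern file later passed to spdk_dd's --if), which spdk_dd then copies onto the ftl0 bdev described by ftl.json. The reported figures are consistent; a minimal sketch of the arithmetic, using only the numbers printed by dd above:

# Sketch: reproduce the arithmetic behind the dd summary above.
# All inputs are values printed in the log; awk handles the floating-point division.
records=65536                                   # count=65536
block_size=4096                                 # bs=4K
elapsed=1.07974                                 # seconds, as reported by dd
bytes=$((records * block_size))                 # 268435456 bytes
echo "bytes copied: ${bytes} ($((bytes / 1024 / 1024)) MiB)"
awk -v b="$bytes" -v t="$elapsed" 'BEGIN { printf "throughput: %.1f MB/s\n", b / t / 1000000 }'

268435456 bytes over 1.07974 s is about 248.6 MB/s, which dd rounds to the 249 MB/s shown above.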
00:18:35.296 [2024-07-26 14:23:54.976825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78958 ] 00:18:35.554 [2024-07-26 14:23:55.135328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.554 [2024-07-26 14:23:55.308431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.123 [2024-07-26 14:23:55.594834] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:36.123 [2024-07-26 14:23:55.594961] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:36.123 [2024-07-26 14:23:55.753532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.753598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:36.123 [2024-07-26 14:23:55.753634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:36.123 [2024-07-26 14:23:55.753645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.756614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.756654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:36.123 [2024-07-26 14:23:55.756686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.942 ms 00:18:36.123 [2024-07-26 14:23:55.756696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.756824] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:36.123 [2024-07-26 14:23:55.757750] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:36.123 [2024-07-26 14:23:55.757792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.757807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:36.123 [2024-07-26 14:23:55.757820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:18:36.123 [2024-07-26 14:23:55.757831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.759228] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:36.123 [2024-07-26 14:23:55.772951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.773002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:36.123 [2024-07-26 14:23:55.773040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.723 ms 00:18:36.123 [2024-07-26 14:23:55.773052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.773172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.773193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:36.123 [2024-07-26 14:23:55.773205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:36.123 [2024-07-26 14:23:55.773216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.777966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:36.123 [2024-07-26 14:23:55.778007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:36.123 [2024-07-26 14:23:55.778022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:18:36.123 [2024-07-26 14:23:55.778033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.778148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.778168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:36.123 [2024-07-26 14:23:55.778181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:36.123 [2024-07-26 14:23:55.778191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.778243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.778259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:36.123 [2024-07-26 14:23:55.778274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:36.123 [2024-07-26 14:23:55.778284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.778314] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:36.123 [2024-07-26 14:23:55.782131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.782166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:36.123 [2024-07-26 14:23:55.782197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:18:36.123 [2024-07-26 14:23:55.782207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.782271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.782288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:36.123 [2024-07-26 14:23:55.782299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:36.123 [2024-07-26 14:23:55.782309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.782333] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:36.123 [2024-07-26 14:23:55.782358] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:36.123 [2024-07-26 14:23:55.782398] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:36.123 [2024-07-26 14:23:55.782417] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:36.123 [2024-07-26 14:23:55.782505] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:36.123 [2024-07-26 14:23:55.782519] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:36.123 [2024-07-26 14:23:55.782532] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:36.123 [2024-07-26 14:23:55.782544] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:36.123 [2024-07-26 14:23:55.782556] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:36.123 [2024-07-26 14:23:55.782572] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:36.123 [2024-07-26 14:23:55.782582] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:36.123 [2024-07-26 14:23:55.782592] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:36.123 [2024-07-26 14:23:55.782602] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:36.123 [2024-07-26 14:23:55.782612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.782622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:36.123 [2024-07-26 14:23:55.782633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:18:36.123 [2024-07-26 14:23:55.782643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.782725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.123 [2024-07-26 14:23:55.782739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:36.123 [2024-07-26 14:23:55.782754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:36.123 [2024-07-26 14:23:55.782764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.123 [2024-07-26 14:23:55.782853] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:36.123 [2024-07-26 14:23:55.782868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:36.123 [2024-07-26 14:23:55.782879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.123 [2024-07-26 14:23:55.782890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.123 [2024-07-26 14:23:55.782900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:36.123 [2024-07-26 14:23:55.782928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:36.123 [2024-07-26 14:23:55.782958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:36.123 [2024-07-26 14:23:55.782970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:36.123 [2024-07-26 14:23:55.782981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:36.123 [2024-07-26 14:23:55.782990] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.123 [2024-07-26 14:23:55.783000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:36.123 [2024-07-26 14:23:55.783010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:36.123 [2024-07-26 14:23:55.783019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:36.123 [2024-07-26 14:23:55.783030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:36.123 [2024-07-26 14:23:55.783040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:36.123 [2024-07-26 14:23:55.783049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.123 [2024-07-26 14:23:55.783059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:36.123 [2024-07-26 14:23:55.783068] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783091] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:36.124 [2024-07-26 14:23:55.783111] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:36.124 [2024-07-26 14:23:55.783140] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:36.124 [2024-07-26 14:23:55.783168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783178] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783187] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:36.124 [2024-07-26 14:23:55.783197] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:36.124 [2024-07-26 14:23:55.783225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783235] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.124 [2024-07-26 14:23:55.783244] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:36.124 [2024-07-26 14:23:55.783254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:36.124 [2024-07-26 14:23:55.783264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:36.124 [2024-07-26 14:23:55.783273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:36.124 [2024-07-26 14:23:55.783299] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:36.124 [2024-07-26 14:23:55.783308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783318] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:36.124 [2024-07-26 14:23:55.783327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:36.124 [2024-07-26 14:23:55.783337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783346] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:36.124 [2024-07-26 14:23:55.783356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:36.124 [2024-07-26 14:23:55.783368] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783378] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:36.124 [2024-07-26 14:23:55.783393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:36.124 [2024-07-26 14:23:55.783402] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:36.124 [2024-07-26 14:23:55.783412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:36.124 
[2024-07-26 14:23:55.783421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:36.124 [2024-07-26 14:23:55.783430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:36.124 [2024-07-26 14:23:55.783440] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:36.124 [2024-07-26 14:23:55.783451] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:36.124 [2024-07-26 14:23:55.783463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:36.124 [2024-07-26 14:23:55.783485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:36.124 [2024-07-26 14:23:55.783495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:36.124 [2024-07-26 14:23:55.783505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:36.124 [2024-07-26 14:23:55.783515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:36.124 [2024-07-26 14:23:55.783525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:36.124 [2024-07-26 14:23:55.783536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:36.124 [2024-07-26 14:23:55.783574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:36.124 [2024-07-26 14:23:55.783601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:36.124 [2024-07-26 14:23:55.783613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:36.124 [2024-07-26 14:23:55.783673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:36.124 [2024-07-26 14:23:55.783685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:36.124 [2024-07-26 14:23:55.783710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:36.124 [2024-07-26 14:23:55.783721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:36.124 [2024-07-26 14:23:55.783732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:36.124 [2024-07-26 14:23:55.783744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.783755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:36.124 [2024-07-26 14:23:55.783766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:18:36.124 [2024-07-26 14:23:55.783777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.822371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.822433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:36.124 [2024-07-26 14:23:55.822474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.500 ms 00:18:36.124 [2024-07-26 14:23:55.822485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.822686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.822705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:36.124 [2024-07-26 14:23:55.822724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:36.124 [2024-07-26 14:23:55.822734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.854339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.854398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:36.124 [2024-07-26 14:23:55.854432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.575 ms 00:18:36.124 [2024-07-26 14:23:55.854443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.854594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.854612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:36.124 [2024-07-26 14:23:55.854624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:36.124 [2024-07-26 14:23:55.854635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.855018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.855044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:36.124 [2024-07-26 14:23:55.855065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:18:36.124 [2024-07-26 14:23:55.855082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.855352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.855385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:36.124 [2024-07-26 14:23:55.855413] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:18:36.124 [2024-07-26 14:23:55.855425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.124 [2024-07-26 14:23:55.869322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.124 [2024-07-26 14:23:55.869366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:36.124 [2024-07-26 14:23:55.869398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.863 ms 00:18:36.124 [2024-07-26 14:23:55.869409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:55.883930] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:36.385 [2024-07-26 14:23:55.884018] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:36.385 [2024-07-26 14:23:55.884053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:55.884066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:36.385 [2024-07-26 14:23:55.884078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.474 ms 00:18:36.385 [2024-07-26 14:23:55.884089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:55.912710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:55.912761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:36.385 [2024-07-26 14:23:55.912795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.473 ms 00:18:36.385 [2024-07-26 14:23:55.912805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:55.928283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:55.928330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:36.385 [2024-07-26 14:23:55.928363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.303 ms 00:18:36.385 [2024-07-26 14:23:55.928374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:55.944744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:55.944792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:36.385 [2024-07-26 14:23:55.944841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.199 ms 00:18:36.385 [2024-07-26 14:23:55.944867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:55.945786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:55.945859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:36.385 [2024-07-26 14:23:55.945891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:18:36.385 [2024-07-26 14:23:55.945917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.008294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.008370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:36.385 [2024-07-26 14:23:56.008406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.310 ms 00:18:36.385 [2024-07-26 14:23:56.008417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.019620] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:36.385 [2024-07-26 14:23:56.032656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.032726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:36.385 [2024-07-26 14:23:56.032761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.081 ms 00:18:36.385 [2024-07-26 14:23:56.032772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.032970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.032999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:36.385 [2024-07-26 14:23:56.033028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:36.385 [2024-07-26 14:23:56.033046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.033134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.033159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:36.385 [2024-07-26 14:23:56.033194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:36.385 [2024-07-26 14:23:56.033227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.033297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.033336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:36.385 [2024-07-26 14:23:56.033351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:36.385 [2024-07-26 14:23:56.033371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.033436] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:36.385 [2024-07-26 14:23:56.033466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.033493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:36.385 [2024-07-26 14:23:56.033533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:36.385 [2024-07-26 14:23:56.033557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.060406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.060455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:36.385 [2024-07-26 14:23:56.060493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.796 ms 00:18:36.385 [2024-07-26 14:23:56.060504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.385 [2024-07-26 14:23:56.060624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.385 [2024-07-26 14:23:56.060644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:36.385 [2024-07-26 14:23:56.060656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:36.385 [2024-07-26 14:23:56.060666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
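Each FTL management step in the startup trace above is emitted as a group of NOTICE records from mngt/ftl_mngt.c (an Action or Rollback marker, then name, duration and status). When reading a long capture like this one, a small filter can rank the steps by duration. The sketch below assumes the console output has been saved with one record per line, as in the original console stream; console.log is a placeholder file name, not a path used by the test:

# Sketch: rank FTL trace_step durations from a saved console log.
# Pairs each "name: <step>" record with the "duration: <n> ms" record that follows it.
awk '
  /trace_step: .*name: /     { sub(/.*name: /, "");     step = $0 }
  /trace_step: .*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                               if (step != "") printf "%10.3f ms  %s\n", $0, step
                               step = "" }
' console.log | sort -rn | head

Run against the startup sequence above, this would put "Restore P2L checkpoints" (62.310 ms) and "Initialize metadata" (38.500 ms) at the top.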
00:18:36.385 [2024-07-26 14:23:56.061900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:36.385 [2024-07-26 14:23:56.065571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 307.992 ms, result 0 00:18:36.385 [2024-07-26 14:23:56.066523] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:36.385 [2024-07-26 14:23:56.080893] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:47.706  Copying: 21/256 [MB] (21 MBps) Copying: 44/256 [MB] (22 MBps) Copying: 67/256 [MB] (22 MBps) Copying: 90/256 [MB] (23 MBps) Copying: 113/256 [MB] (23 MBps) Copying: 137/256 [MB] (23 MBps) Copying: 160/256 [MB] (23 MBps) Copying: 183/256 [MB] (22 MBps) Copying: 205/256 [MB] (22 MBps) Copying: 228/256 [MB] (22 MBps) Copying: 251/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-26 14:24:07.281663] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:47.706 [2024-07-26 14:24:07.292153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.292192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.706 [2024-07-26 14:24:07.292226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:47.706 [2024-07-26 14:24:07.292238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.292265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.706 [2024-07-26 14:24:07.295055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.295088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.706 [2024-07-26 14:24:07.295117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.754 ms 00:18:47.706 [2024-07-26 14:24:07.295128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.296959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.296995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.706 [2024-07-26 14:24:07.297027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:18:47.706 [2024-07-26 14:24:07.297038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.303160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.303212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.706 [2024-07-26 14:24:07.303244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:18:47.706 [2024-07-26 14:24:07.303261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.309487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.309519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:47.706 [2024-07-26 14:24:07.309549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.171 ms 00:18:47.706 [2024-07-26 14:24:07.309559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:47.706 [2024-07-26 14:24:07.335329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.335366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.706 [2024-07-26 14:24:07.335398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.724 ms 00:18:47.706 [2024-07-26 14:24:07.335408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.350825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.350878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.706 [2024-07-26 14:24:07.350939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.374 ms 00:18:47.706 [2024-07-26 14:24:07.350969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.351153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.351173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.706 [2024-07-26 14:24:07.351186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:47.706 [2024-07-26 14:24:07.351197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.377174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.377211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:47.706 [2024-07-26 14:24:07.377242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.955 ms 00:18:47.706 [2024-07-26 14:24:07.377252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.402479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.402515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:47.706 [2024-07-26 14:24:07.402547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.166 ms 00:18:47.706 [2024-07-26 14:24:07.402557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.427711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.427776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.706 [2024-07-26 14:24:07.427810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.109 ms 00:18:47.706 [2024-07-26 14:24:07.427821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.455184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.706 [2024-07-26 14:24:07.455223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.706 [2024-07-26 14:24:07.455255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.201 ms 00:18:47.706 [2024-07-26 14:24:07.455265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.706 [2024-07-26 14:24:07.455323] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.706 [2024-07-26 14:24:07.455347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.706 [2024-07-26 14:24:07.455661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.455980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456074] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456411] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.707 [2024-07-26 14:24:07.456706] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.707 [2024-07-26 14:24:07.456717] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:18:47.707 [2024-07-26 14:24:07.456728] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.707 [2024-07-26 14:24:07.456754] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.707 [2024-07-26 14:24:07.456780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.707 [2024-07-26 14:24:07.456806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.707 [2024-07-26 14:24:07.456817] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.708 [2024-07-26 14:24:07.456828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.708 [2024-07-26 14:24:07.456854] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.708 [2024-07-26 14:24:07.456876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.708 [2024-07-26 14:24:07.456886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.708 [2024-07-26 14:24:07.456898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.708 [2024-07-26 14:24:07.456909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.708 [2024-07-26 14:24:07.456921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:18:47.708 [2024-07-26 14:24:07.456937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.473616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.967 [2024-07-26 14:24:07.473658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.967 [2024-07-26 14:24:07.473674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.651 ms 00:18:47.967 [2024-07-26 14:24:07.473685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.474216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.967 [2024-07-26 14:24:07.474247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.967 [2024-07-26 14:24:07.474270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:18:47.967 [2024-07-26 14:24:07.474281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.509411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.509469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.967 [2024-07-26 14:24:07.509502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.509513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.509630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.509646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.967 [2024-07-26 14:24:07.509661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.509672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.509734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.509751] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.967 [2024-07-26 14:24:07.509762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.509773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.509795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.509808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.967 [2024-07-26 14:24:07.509820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.509836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.598596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.598670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:47.967 [2024-07-26 14:24:07.598703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.598714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.672504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.672563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:47.967 [2024-07-26 14:24:07.672603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.672614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.672693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.672709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:47.967 [2024-07-26 14:24:07.672721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.672731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.672762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.672775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:47.967 [2024-07-26 14:24:07.672785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.672795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.672906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.672981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:47.967 [2024-07-26 14:24:07.672994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.673005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.673070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.673087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:47.967 [2024-07-26 14:24:07.673099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.673110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.673163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:47.967 [2024-07-26 14:24:07.673178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:47.967 [2024-07-26 14:24:07.673189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.673200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.673251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.967 [2024-07-26 14:24:07.673267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:47.967 [2024-07-26 14:24:07.673294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.967 [2024-07-26 14:24:07.673320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.967 [2024-07-26 14:24:07.673524] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 381.358 ms, result 0 00:18:49.359 00:18:49.359 00:18:49.359 14:24:08 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79098 00:18:49.359 14:24:08 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:49.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.359 14:24:08 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79098 00:18:49.359 14:24:08 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79098 ']' 00:18:49.359 14:24:08 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.359 14:24:08 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.359 14:24:08 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.359 14:24:08 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.359 14:24:08 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:49.359 [2024-07-26 14:24:08.860240] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:18:49.359 [2024-07-26 14:24:08.860430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79098 ] 00:18:49.359 [2024-07-26 14:24:09.015671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.618 [2024-07-26 14:24:09.175170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.186 14:24:09 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.186 14:24:09 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:18:50.186 14:24:09 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:50.444 [2024-07-26 14:24:10.067545] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:50.444 [2024-07-26 14:24:10.067649] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:50.704 [2024-07-26 14:24:10.243144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.243215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:50.704 [2024-07-26 14:24:10.243235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:50.704 [2024-07-26 14:24:10.243250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.246621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.246697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:50.704 [2024-07-26 14:24:10.246729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:18:50.704 [2024-07-26 14:24:10.246743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.246960] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:50.704 [2024-07-26 14:24:10.247952] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:50.704 [2024-07-26 14:24:10.247994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.248015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:50.704 [2024-07-26 14:24:10.248030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:18:50.704 [2024-07-26 14:24:10.248060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.249518] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:50.704 [2024-07-26 14:24:10.266576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.266632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:50.704 [2024-07-26 14:24:10.266693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.062 ms 00:18:50.704 [2024-07-26 14:24:10.266739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.266857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.266879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:50.704 [2024-07-26 14:24:10.266933] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:50.704 [2024-07-26 14:24:10.266950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.271644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.271695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:50.704 [2024-07-26 14:24:10.271728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.597 ms 00:18:50.704 [2024-07-26 14:24:10.271744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.271956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.271979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:50.704 [2024-07-26 14:24:10.272016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:18:50.704 [2024-07-26 14:24:10.272032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.272071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.272087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:50.704 [2024-07-26 14:24:10.272101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:50.704 [2024-07-26 14:24:10.272113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.704 [2024-07-26 14:24:10.272153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:50.704 [2024-07-26 14:24:10.276488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.704 [2024-07-26 14:24:10.276548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:50.704 [2024-07-26 14:24:10.276566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.352 ms 00:18:50.705 [2024-07-26 14:24:10.276582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.705 [2024-07-26 14:24:10.276647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.705 [2024-07-26 14:24:10.276701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:50.705 [2024-07-26 14:24:10.276717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:50.705 [2024-07-26 14:24:10.276731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.705 [2024-07-26 14:24:10.276759] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:50.705 [2024-07-26 14:24:10.276802] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:50.705 [2024-07-26 14:24:10.276849] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:50.705 [2024-07-26 14:24:10.276877] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:50.705 [2024-07-26 14:24:10.277006] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:50.705 [2024-07-26 14:24:10.277042] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:50.705 [2024-07-26 14:24:10.277058] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:50.705 [2024-07-26 14:24:10.277075] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277105] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277121] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:50.705 [2024-07-26 14:24:10.277132] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:50.705 [2024-07-26 14:24:10.277146] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:50.705 [2024-07-26 14:24:10.277158] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:50.705 [2024-07-26 14:24:10.277175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.705 [2024-07-26 14:24:10.277187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:50.705 [2024-07-26 14:24:10.277203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:18:50.705 [2024-07-26 14:24:10.277217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.705 [2024-07-26 14:24:10.277382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.705 [2024-07-26 14:24:10.277400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:50.705 [2024-07-26 14:24:10.277416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:50.705 [2024-07-26 14:24:10.277429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.705 [2024-07-26 14:24:10.277552] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:50.705 [2024-07-26 14:24:10.277581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:50.705 [2024-07-26 14:24:10.277600] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.705 [2024-07-26 14:24:10.277645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:50.705 [2024-07-26 14:24:10.277658] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:50.705 [2024-07-26 14:24:10.277688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:50.705 [2024-07-26 14:24:10.277731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:50.705 [2024-07-26 14:24:10.277742] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:50.705 [2024-07-26 14:24:10.277756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:50.705 [2024-07-26 14:24:10.277768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:50.705 [2024-07-26 14:24:10.277797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:50.705 [2024-07-26 14:24:10.277808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:50.705 [2024-07-26 14:24:10.277837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:50.705 [2024-07-26 14:24:10.277848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.705 
[2024-07-26 14:24:10.277862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:50.705 [2024-07-26 14:24:10.277873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.705 [2024-07-26 14:24:10.277897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:50.705 [2024-07-26 14:24:10.277911] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:50.705 [2024-07-26 14:24:10.277923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:50.705 [2024-07-26 14:24:10.277948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:50.705 [2024-07-26 14:24:10.277964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.705 [2024-07-26 14:24:10.277975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:50.705 [2024-07-26 14:24:10.277989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:50.705 [2024-07-26 14:24:10.278011] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.705 [2024-07-26 14:24:10.278039] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:50.705 [2024-07-26 14:24:10.278056] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:50.705 [2024-07-26 14:24:10.278070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:50.705 [2024-07-26 14:24:10.278082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:50.705 [2024-07-26 14:24:10.278096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:50.705 [2024-07-26 14:24:10.278107] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:50.705 [2024-07-26 14:24:10.278121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:50.705 [2024-07-26 14:24:10.278146] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:50.705 [2024-07-26 14:24:10.278159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:50.705 [2024-07-26 14:24:10.278171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:50.705 [2024-07-26 14:24:10.278185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:50.705 [2024-07-26 14:24:10.278196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.705 [2024-07-26 14:24:10.278211] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:50.705 [2024-07-26 14:24:10.278223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:50.705 [2024-07-26 14:24:10.278236] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.705 [2024-07-26 14:24:10.278246] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:50.705 [2024-07-26 14:24:10.278261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:50.705 [2024-07-26 14:24:10.278273] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:50.705 [2024-07-26 14:24:10.278287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:50.705 [2024-07-26 14:24:10.278299] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:50.705 [2024-07-26 14:24:10.278345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:50.705 [2024-07-26 14:24:10.278371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:50.705 [2024-07-26 14:24:10.278385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:50.705 [2024-07-26 14:24:10.278397] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:50.705 [2024-07-26 14:24:10.278410] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:50.705 [2024-07-26 14:24:10.278423] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:50.705 [2024-07-26 14:24:10.278441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:50.705 [2024-07-26 14:24:10.278457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:50.705 [2024-07-26 14:24:10.278475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:50.705 [2024-07-26 14:24:10.278488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:50.705 [2024-07-26 14:24:10.278503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:50.705 [2024-07-26 14:24:10.278515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:50.705 [2024-07-26 14:24:10.278529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:50.705 [2024-07-26 14:24:10.278542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:50.705 [2024-07-26 14:24:10.278556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:50.705 [2024-07-26 14:24:10.278568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:50.705 [2024-07-26 14:24:10.278582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:50.705 [2024-07-26 14:24:10.278595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:50.705 [2024-07-26 14:24:10.278609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:50.706 [2024-07-26 14:24:10.278622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:50.706 [2024-07-26 14:24:10.278636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:50.706 [2024-07-26 14:24:10.278648] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:50.706 [2024-07-26 
14:24:10.278664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:50.706 [2024-07-26 14:24:10.278677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:50.706 [2024-07-26 14:24:10.278724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:50.706 [2024-07-26 14:24:10.278737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:50.706 [2024-07-26 14:24:10.278751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:50.706 [2024-07-26 14:24:10.278764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.278778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:50.706 [2024-07-26 14:24:10.278791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:18:50.706 [2024-07-26 14:24:10.278808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.310402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.310654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:50.706 [2024-07-26 14:24:10.310811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.521 ms 00:18:50.706 [2024-07-26 14:24:10.310867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.311186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.311349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:50.706 [2024-07-26 14:24:10.311461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:50.706 [2024-07-26 14:24:10.311517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.345213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.345525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:50.706 [2024-07-26 14:24:10.345639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.542 ms 00:18:50.706 [2024-07-26 14:24:10.345698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.345942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.346072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:50.706 [2024-07-26 14:24:10.346173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:50.706 [2024-07-26 14:24:10.346224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.346673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.346801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:50.706 [2024-07-26 14:24:10.346958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:18:50.706 [2024-07-26 14:24:10.347014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.347186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.347239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:50.706 [2024-07-26 14:24:10.347365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:18:50.706 [2024-07-26 14:24:10.347506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.363147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.363336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:50.706 [2024-07-26 14:24:10.363452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.526 ms 00:18:50.706 [2024-07-26 14:24:10.363511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.378006] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:50.706 [2024-07-26 14:24:10.378050] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:50.706 [2024-07-26 14:24:10.378072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.378087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:50.706 [2024-07-26 14:24:10.378100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.306 ms 00:18:50.706 [2024-07-26 14:24:10.378113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.404424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.404484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:50.706 [2024-07-26 14:24:10.404502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.219 ms 00:18:50.706 [2024-07-26 14:24:10.404520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.418411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.418467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:50.706 [2024-07-26 14:24:10.418493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.803 ms 00:18:50.706 [2024-07-26 14:24:10.418510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.432141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.432183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:50.706 [2024-07-26 14:24:10.432199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.547 ms 00:18:50.706 [2024-07-26 14:24:10.432212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.706 [2024-07-26 14:24:10.432957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.706 [2024-07-26 14:24:10.432993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:50.706 [2024-07-26 14:24:10.433016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.617 ms 00:18:50.706 [2024-07-26 14:24:10.433030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 
14:24:10.506601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.506691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:50.965 [2024-07-26 14:24:10.506728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.540 ms 00:18:50.965 [2024-07-26 14:24:10.506742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.517696] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:50.965 [2024-07-26 14:24:10.530146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.530229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:50.965 [2024-07-26 14:24:10.530270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.256 ms 00:18:50.965 [2024-07-26 14:24:10.530283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.530428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.530447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:50.965 [2024-07-26 14:24:10.530463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:50.965 [2024-07-26 14:24:10.530475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.530541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.530558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:50.965 [2024-07-26 14:24:10.530575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:50.965 [2024-07-26 14:24:10.530587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.530620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.530634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:50.965 [2024-07-26 14:24:10.530648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:50.965 [2024-07-26 14:24:10.530660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.530703] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:50.965 [2024-07-26 14:24:10.530734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.530749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:50.965 [2024-07-26 14:24:10.530761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:50.965 [2024-07-26 14:24:10.530777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.558912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.558979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:50.965 [2024-07-26 14:24:10.558998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.108 ms 00:18:50.965 [2024-07-26 14:24:10.559012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.559140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:50.965 [2024-07-26 14:24:10.559169] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:50.965 [2024-07-26 14:24:10.559184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:50.965 [2024-07-26 14:24:10.559197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:50.965 [2024-07-26 14:24:10.560394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:50.965 [2024-07-26 14:24:10.564111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.760 ms, result 0 00:18:50.965 [2024-07-26 14:24:10.565215] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:50.965 Some configs were skipped because the RPC state that can call them passed over. 00:18:50.965 14:24:10 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:51.222 [2024-07-26 14:24:10.797698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.222 [2024-07-26 14:24:10.798043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:51.222 [2024-07-26 14:24:10.798172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:18:51.222 [2024-07-26 14:24:10.798224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.222 [2024-07-26 14:24:10.798398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.274 ms, result 0 00:18:51.222 true 00:18:51.222 14:24:10 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:51.479 [2024-07-26 14:24:11.049812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:51.479 [2024-07-26 14:24:11.050154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:51.479 [2024-07-26 14:24:11.050184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:18:51.479 [2024-07-26 14:24:11.050212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:51.479 [2024-07-26 14:24:11.050273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.825 ms, result 0 00:18:51.479 true 00:18:51.479 14:24:11 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79098 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79098 ']' 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79098 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79098 00:18:51.480 killing process with pid 79098 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79098' 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79098 00:18:51.480 14:24:11 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79098 00:18:52.416 [2024-07-26 14:24:11.935340] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.416 [2024-07-26 14:24:11.935406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:52.416 [2024-07-26 14:24:11.935446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:52.416 [2024-07-26 14:24:11.935460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.416 [2024-07-26 14:24:11.935492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:52.416 [2024-07-26 14:24:11.938480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.416 [2024-07-26 14:24:11.938531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:52.416 [2024-07-26 14:24:11.938548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.966 ms 00:18:52.416 [2024-07-26 14:24:11.938564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.416 [2024-07-26 14:24:11.938841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.416 [2024-07-26 14:24:11.938863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:52.416 [2024-07-26 14:24:11.938877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:18:52.416 [2024-07-26 14:24:11.938890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.416 [2024-07-26 14:24:11.942990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.416 [2024-07-26 14:24:11.943081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:52.416 [2024-07-26 14:24:11.943100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:18:52.416 [2024-07-26 14:24:11.943115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.416 [2024-07-26 14:24:11.949653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.416 [2024-07-26 14:24:11.949705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:52.416 [2024-07-26 14:24:11.949721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.489 ms 00:18:52.417 [2024-07-26 14:24:11.949736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:11.960723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:11.960779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:52.417 [2024-07-26 14:24:11.960796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.935 ms 00:18:52.417 [2024-07-26 14:24:11.960810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:11.968972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:11.969018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:52.417 [2024-07-26 14:24:11.969034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.120 ms 00:18:52.417 [2024-07-26 14:24:11.969047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:11.969184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:11.969207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:52.417 [2024-07-26 14:24:11.969220] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:52.417 [2024-07-26 14:24:11.969245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:11.980789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:11.980845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:52.417 [2024-07-26 14:24:11.980861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.520 ms 00:18:52.417 [2024-07-26 14:24:11.980874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:11.993711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:11.993767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:52.417 [2024-07-26 14:24:11.993784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.750 ms 00:18:52.417 [2024-07-26 14:24:11.993801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:12.005354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:12.005410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:52.417 [2024-07-26 14:24:12.005426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.512 ms 00:18:52.417 [2024-07-26 14:24:12.005440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:12.016316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.417 [2024-07-26 14:24:12.016371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:52.417 [2024-07-26 14:24:12.016387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.807 ms 00:18:52.417 [2024-07-26 14:24:12.016400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.417 [2024-07-26 14:24:12.016440] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:52.417 [2024-07-26 14:24:12.016467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 
14:24:12.016600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:18:52.417 [2024-07-26 14:24:12.016952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.016993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:52.417 [2024-07-26 14:24:12.017282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:52.418 [2024-07-26 14:24:12.017891] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:52.418 [2024-07-26 14:24:12.017903] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:18:52.418 [2024-07-26 14:24:12.017919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:52.418 [2024-07-26 14:24:12.017931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:52.418 [2024-07-26 14:24:12.017944] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:52.418 [2024-07-26 14:24:12.017957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:52.418 [2024-07-26 14:24:12.017979] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:52.418 [2024-07-26 14:24:12.017994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:52.418 [2024-07-26 14:24:12.018008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:52.418 [2024-07-26 14:24:12.018019] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:52.418 [2024-07-26 14:24:12.018045] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:52.418 [2024-07-26 14:24:12.018057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:52.418 [2024-07-26 14:24:12.018071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:52.418 [2024-07-26 14:24:12.018084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:18:52.418 [2024-07-26 14:24:12.018101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.032243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.418 [2024-07-26 14:24:12.032299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:52.418 [2024-07-26 14:24:12.032316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.102 ms 00:18:52.418 [2024-07-26 14:24:12.032332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.032784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.418 [2024-07-26 14:24:12.032822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:52.418 [2024-07-26 14:24:12.032841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:18:52.418 [2024-07-26 14:24:12.032855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.078773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.418 [2024-07-26 14:24:12.078846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.418 [2024-07-26 14:24:12.078864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.418 [2024-07-26 14:24:12.078877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.079031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.418 [2024-07-26 14:24:12.079070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.418 [2024-07-26 14:24:12.079087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.418 [2024-07-26 14:24:12.079101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.079167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.418 [2024-07-26 14:24:12.079191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.418 [2024-07-26 14:24:12.079204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.418 [2024-07-26 14:24:12.079220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.079245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.418 [2024-07-26 14:24:12.079262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.418 [2024-07-26 14:24:12.079274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.418 [2024-07-26 14:24:12.079307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.418 [2024-07-26 14:24:12.161926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.418 [2024-07-26 14:24:12.162010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.418 [2024-07-26 14:24:12.162028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.418 [2024-07-26 14:24:12.162042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 
14:24:12.232910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.677 [2024-07-26 14:24:12.233062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.233206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.677 [2024-07-26 14:24:12.233270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.233348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.677 [2024-07-26 14:24:12.233380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.233568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.677 [2024-07-26 14:24:12.233608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.233721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:52.677 [2024-07-26 14:24:12.233756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.233822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:52.677 [2024-07-26 14:24:12.233853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.233921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:52.677 [2024-07-26 14:24:12.233942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.677 [2024-07-26 14:24:12.233955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:52.677 [2024-07-26 14:24:12.233969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.677 [2024-07-26 14:24:12.234157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 298.797 ms, result 0 00:18:53.611 14:24:13 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:53.611 14:24:13 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:53.611 [2024-07-26 14:24:13.166983] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:53.611 [2024-07-26 14:24:13.167155] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79152 ] 00:18:53.611 [2024-07-26 14:24:13.336771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.869 [2024-07-26 14:24:13.495647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.127 [2024-07-26 14:24:13.790415] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.127 [2024-07-26 14:24:13.790509] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.387 [2024-07-26 14:24:13.948662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.948719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.387 [2024-07-26 14:24:13.948754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:54.387 [2024-07-26 14:24:13.948765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.951649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.951691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.387 [2024-07-26 14:24:13.951724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.856 ms 00:18:54.387 [2024-07-26 14:24:13.951735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.951904] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.387 [2024-07-26 14:24:13.952827] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.387 [2024-07-26 14:24:13.952881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.952947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.387 [2024-07-26 14:24:13.952964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:18:54.387 [2024-07-26 14:24:13.952976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.954255] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:54.387 [2024-07-26 14:24:13.969060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.969099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:54.387 [2024-07-26 14:24:13.969137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.807 ms 00:18:54.387 [2024-07-26 14:24:13.969148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.969256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.969277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:54.387 [2024-07-26 14:24:13.969289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:18:54.387 [2024-07-26 14:24:13.969299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.973569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.973607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.387 [2024-07-26 14:24:13.973636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:18:54.387 [2024-07-26 14:24:13.973647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.973784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.973804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.387 [2024-07-26 14:24:13.973816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:54.387 [2024-07-26 14:24:13.973827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.973867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.973882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.387 [2024-07-26 14:24:13.973897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:54.387 [2024-07-26 14:24:13.973968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.974006] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:54.387 [2024-07-26 14:24:13.977789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.977823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.387 [2024-07-26 14:24:13.977853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.791 ms 00:18:54.387 [2024-07-26 14:24:13.977864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.977958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.977978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.387 [2024-07-26 14:24:13.977990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:54.387 [2024-07-26 14:24:13.978000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.978046] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:54.387 [2024-07-26 14:24:13.978075] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:54.387 [2024-07-26 14:24:13.978118] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:54.387 [2024-07-26 14:24:13.978137] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:54.387 [2024-07-26 14:24:13.978229] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.387 [2024-07-26 14:24:13.978244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.387 [2024-07-26 14:24:13.978257] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:54.387 [2024-07-26 14:24:13.978286] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.387 [2024-07-26 14:24:13.978299] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.387 [2024-07-26 14:24:13.978315] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:54.387 [2024-07-26 14:24:13.978341] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.387 [2024-07-26 14:24:13.978351] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.387 [2024-07-26 14:24:13.978361] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.387 [2024-07-26 14:24:13.978372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.978382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.387 [2024-07-26 14:24:13.978394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:54.387 [2024-07-26 14:24:13.978404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.978489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.387 [2024-07-26 14:24:13.978504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.387 [2024-07-26 14:24:13.978520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:54.387 [2024-07-26 14:24:13.978530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.387 [2024-07-26 14:24:13.978625] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.388 [2024-07-26 14:24:13.978642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.388 [2024-07-26 14:24:13.978654] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978665] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978676] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.388 [2024-07-26 14:24:13.978685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.388 [2024-07-26 14:24:13.978714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.388 [2024-07-26 14:24:13.978733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.388 [2024-07-26 14:24:13.978743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:54.388 [2024-07-26 14:24:13.978752] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.388 [2024-07-26 14:24:13.978761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.388 [2024-07-26 14:24:13.978771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:54.388 [2024-07-26 14:24:13.978780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978790] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.388 [2024-07-26 14:24:13.978801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.388 [2024-07-26 14:24:13.978845] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978865] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.388 [2024-07-26 14:24:13.978874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.388 [2024-07-26 14:24:13.978901] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.388 [2024-07-26 14:24:13.978930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.388 [2024-07-26 14:24:13.978949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.388 [2024-07-26 14:24:13.978975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:54.388 [2024-07-26 14:24:13.978986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.388 [2024-07-26 14:24:13.978996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.388 [2024-07-26 14:24:13.979006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:54.388 [2024-07-26 14:24:13.979015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.388 [2024-07-26 14:24:13.979024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.388 [2024-07-26 14:24:13.979034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:54.388 [2024-07-26 14:24:13.979043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.979052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:54.388 [2024-07-26 14:24:13.979062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:54.388 [2024-07-26 14:24:13.979071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.979080] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.388 [2024-07-26 14:24:13.979091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.388 [2024-07-26 14:24:13.979102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.388 [2024-07-26 14:24:13.979112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.388 [2024-07-26 14:24:13.979127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.388 
[2024-07-26 14:24:13.979137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.388 [2024-07-26 14:24:13.979148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.388 [2024-07-26 14:24:13.979158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.388 [2024-07-26 14:24:13.979168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.388 [2024-07-26 14:24:13.979177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.388 [2024-07-26 14:24:13.979188] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.388 [2024-07-26 14:24:13.979201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:54.388 [2024-07-26 14:24:13.979223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:54.388 [2024-07-26 14:24:13.979233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:54.388 [2024-07-26 14:24:13.979244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:54.388 [2024-07-26 14:24:13.979255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:54.388 [2024-07-26 14:24:13.979265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:54.388 [2024-07-26 14:24:13.979292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:54.388 [2024-07-26 14:24:13.979302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:54.388 [2024-07-26 14:24:13.979313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:54.388 [2024-07-26 14:24:13.979323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:54.388 [2024-07-26 14:24:13.979377] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.388 [2024-07-26 14:24:13.979389] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.388 [2024-07-26 14:24:13.979411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.388 [2024-07-26 14:24:13.979422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.388 [2024-07-26 14:24:13.979433] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.388 [2024-07-26 14:24:13.979444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.388 [2024-07-26 14:24:13.979456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.388 [2024-07-26 14:24:13.979467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:18:54.388 [2024-07-26 14:24:13.979477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.388 [2024-07-26 14:24:14.019255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.388 [2024-07-26 14:24:14.019331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.388 [2024-07-26 14:24:14.019373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.687 ms 00:18:54.388 [2024-07-26 14:24:14.019384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.388 [2024-07-26 14:24:14.019586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.388 [2024-07-26 14:24:14.019640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:54.388 [2024-07-26 14:24:14.019662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:54.388 [2024-07-26 14:24:14.019673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.388 [2024-07-26 14:24:14.052477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.388 [2024-07-26 14:24:14.052531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.388 [2024-07-26 14:24:14.052565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.767 ms 00:18:54.388 [2024-07-26 14:24:14.052575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.388 [2024-07-26 14:24:14.052739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.388 [2024-07-26 14:24:14.052758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.389 [2024-07-26 14:24:14.052770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:54.389 [2024-07-26 14:24:14.052780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.053159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.053178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.389 [2024-07-26 14:24:14.053191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:18:54.389 [2024-07-26 14:24:14.053202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 
14:24:14.053402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.053437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.389 [2024-07-26 14:24:14.053464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:18:54.389 [2024-07-26 14:24:14.053475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.067587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.067646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.389 [2024-07-26 14:24:14.067678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.074 ms 00:18:54.389 [2024-07-26 14:24:14.067690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.081798] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:54.389 [2024-07-26 14:24:14.081839] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:54.389 [2024-07-26 14:24:14.081873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.081884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:54.389 [2024-07-26 14:24:14.081896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.023 ms 00:18:54.389 [2024-07-26 14:24:14.081907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.108058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.108098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:54.389 [2024-07-26 14:24:14.108130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.008 ms 00:18:54.389 [2024-07-26 14:24:14.108141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.121848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.121886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:54.389 [2024-07-26 14:24:14.121946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.618 ms 00:18:54.389 [2024-07-26 14:24:14.121958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.135552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.135611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:54.389 [2024-07-26 14:24:14.135658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.492 ms 00:18:54.389 [2024-07-26 14:24:14.135669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.389 [2024-07-26 14:24:14.136509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.389 [2024-07-26 14:24:14.136547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.389 [2024-07-26 14:24:14.136563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms 00:18:54.389 [2024-07-26 14:24:14.136573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.199695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.199767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:54.648 [2024-07-26 14:24:14.199804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.088 ms 00:18:54.648 [2024-07-26 14:24:14.199815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.211796] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:54.648 [2024-07-26 14:24:14.225742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.225842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:54.648 [2024-07-26 14:24:14.225875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.712 ms 00:18:54.648 [2024-07-26 14:24:14.225887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.226078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.226100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:54.648 [2024-07-26 14:24:14.226113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:54.648 [2024-07-26 14:24:14.226124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.226190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.226208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.648 [2024-07-26 14:24:14.226220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:54.648 [2024-07-26 14:24:14.226230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.226292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.226314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.648 [2024-07-26 14:24:14.226325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:54.648 [2024-07-26 14:24:14.226336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.226408] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:54.648 [2024-07-26 14:24:14.226441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.226454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:54.648 [2024-07-26 14:24:14.226466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:54.648 [2024-07-26 14:24:14.226477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.253457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.253517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.648 [2024-07-26 14:24:14.253550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.949 ms 00:18:54.648 [2024-07-26 14:24:14.253561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.253681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.648 [2024-07-26 14:24:14.253701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:54.648 [2024-07-26 14:24:14.253714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:54.648 [2024-07-26 14:24:14.253724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.648 [2024-07-26 14:24:14.254808] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.648 [2024-07-26 14:24:14.258486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.746 ms, result 0 00:18:54.648 [2024-07-26 14:24:14.259345] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:54.648 [2024-07-26 14:24:14.274236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.384  Copying: 24/256 [MB] (24 MBps) Copying: 46/256 [MB] (21 MBps) Copying: 67/256 [MB] (21 MBps) Copying: 89/256 [MB] (22 MBps) Copying: 111/256 [MB] (21 MBps) Copying: 132/256 [MB] (21 MBps) Copying: 154/256 [MB] (21 MBps) Copying: 175/256 [MB] (21 MBps) Copying: 197/256 [MB] (21 MBps) Copying: 218/256 [MB] (21 MBps) Copying: 240/256 [MB] (21 MBps) Copying: 256/256 [MB] (average 21 MBps)[2024-07-26 14:24:25.985941] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.384 [2024-07-26 14:24:25.997757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:25.997803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:06.384 [2024-07-26 14:24:25.997839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:06.384 [2024-07-26 14:24:25.997851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:25.997890] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:06.384 [2024-07-26 14:24:26.001145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.001177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:06.384 [2024-07-26 14:24:26.001225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.218 ms 00:19:06.384 [2024-07-26 14:24:26.001237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.001510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.001529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:06.384 [2024-07-26 14:24:26.001541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:19:06.384 [2024-07-26 14:24:26.001552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.005042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.005071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:06.384 [2024-07-26 14:24:26.005108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.469 ms 00:19:06.384 [2024-07-26 14:24:26.005119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.012162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.012193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Finish L2P trims 00:19:06.384 [2024-07-26 14:24:26.012224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.021 ms 00:19:06.384 [2024-07-26 14:24:26.012249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.040220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.040258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:06.384 [2024-07-26 14:24:26.040274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.909 ms 00:19:06.384 [2024-07-26 14:24:26.040284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.056139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.056177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:06.384 [2024-07-26 14:24:26.056210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.796 ms 00:19:06.384 [2024-07-26 14:24:26.056227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.056375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.056395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:06.384 [2024-07-26 14:24:26.056407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:06.384 [2024-07-26 14:24:26.056418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.384 [2024-07-26 14:24:26.083435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.384 [2024-07-26 14:24:26.083473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:06.384 [2024-07-26 14:24:26.083505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.996 ms 00:19:06.384 [2024-07-26 14:24:26.083516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.385 [2024-07-26 14:24:26.113106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.385 [2024-07-26 14:24:26.113145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:06.385 [2024-07-26 14:24:26.113177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.531 ms 00:19:06.385 [2024-07-26 14:24:26.113219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.645 [2024-07-26 14:24:26.145329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.645 [2024-07-26 14:24:26.145372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:06.645 [2024-07-26 14:24:26.145390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.048 ms 00:19:06.645 [2024-07-26 14:24:26.145402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.645 [2024-07-26 14:24:26.176141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.645 [2024-07-26 14:24:26.176193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:06.645 [2024-07-26 14:24:26.176225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.643 ms 00:19:06.645 [2024-07-26 14:24:26.176236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.645 [2024-07-26 14:24:26.176318] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:06.645 
[2024-07-26 14:24:26.176350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:19:06.645 [2024-07-26 14:24:26.176673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.176998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:06.645 [2024-07-26 14:24:26.177234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177658] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:06.646 [2024-07-26 14:24:26.177679] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:06.646 [2024-07-26 14:24:26.177691] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:19:06.646 [2024-07-26 14:24:26.177702] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:06.646 [2024-07-26 14:24:26.177713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:06.646 [2024-07-26 14:24:26.177736] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:06.646 [2024-07-26 14:24:26.177748] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:06.646 [2024-07-26 14:24:26.177759] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:06.646 [2024-07-26 14:24:26.177770] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:06.646 [2024-07-26 14:24:26.177780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:06.646 [2024-07-26 14:24:26.177790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:06.646 [2024-07-26 14:24:26.177799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:06.646 [2024-07-26 14:24:26.177810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.646 [2024-07-26 14:24:26.177821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:06.646 [2024-07-26 14:24:26.177838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:19:06.646 [2024-07-26 14:24:26.177849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.192523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.646 [2024-07-26 14:24:26.192558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:06.646 [2024-07-26 14:24:26.192590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.649 ms 00:19:06.646 [2024-07-26 14:24:26.192600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.193004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.646 [2024-07-26 14:24:26.193069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:06.646 [2024-07-26 14:24:26.193083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:19:06.646 [2024-07-26 14:24:26.193094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.226755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.226800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.646 [2024-07-26 14:24:26.226832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.226842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.226961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.226983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.646 [2024-07-26 14:24:26.226995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.227005] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.227079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.227097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.646 [2024-07-26 14:24:26.227108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.227119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.227142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.227156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.646 [2024-07-26 14:24:26.227172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.227182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.308874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.308965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.646 [2024-07-26 14:24:26.309000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.309011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.379028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.379090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.646 [2024-07-26 14:24:26.379124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.379135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.379213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.379228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.646 [2024-07-26 14:24:26.379239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.379249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.379280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.379292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.646 [2024-07-26 14:24:26.379303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.379318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.379423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.379441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.646 [2024-07-26 14:24:26.379453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.379462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.379508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.379524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:06.646 [2024-07-26 14:24:26.379535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.379545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.646 [2024-07-26 14:24:26.379595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.646 [2024-07-26 14:24:26.379610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.646 [2024-07-26 14:24:26.379620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.646 [2024-07-26 14:24:26.379656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.647 [2024-07-26 14:24:26.379724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:06.647 [2024-07-26 14:24:26.379740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.647 [2024-07-26 14:24:26.379752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:06.647 [2024-07-26 14:24:26.379767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.647 [2024-07-26 14:24:26.379920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 382.153 ms, result 0 00:19:07.582 00:19:07.582 00:19:07.582 14:24:27 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:07.582 14:24:27 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:08.149 14:24:27 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:08.408 [2024-07-26 14:24:27.941588] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
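For reference, the three ftl_trim script steps traced just above (trim.sh@86, @87 and @90) amount to the short shell sketch below. The paths and flags are copied verbatim from the log (with $SPDK standing in for /home/vagrant/spdk_repo/spdk); the comments describe the likely intent inferred from the command names, not anything taken from trim.sh itself.

    SPDK=/home/vagrant/spdk_repo/spdk

    # Compare the first 4 MiB of the readback file against /dev/zero
    # (presumably verifying that the trimmed range reads back as zeroes).
    cmp --bytes=4194304 "$SPDK/test/ftl/data" /dev/zero

    # Checksum the readback file, presumably for a later comparison.
    md5sum "$SPDK/test/ftl/data"

    # Write 1024 blocks of the random pattern to the ftl0 bdev, restoring the
    # device from the FTL JSON config saved earlier in the test.
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/random_pattern" --ob=ftl0 \
        --count=1024 --json="$SPDK/test/ftl/config/ftl.json"

The spdk_dd invocation in the last step is what produces the FTL startup log that follows.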
00:19:08.408 [2024-07-26 14:24:27.941735] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79307 ] 00:19:08.408 [2024-07-26 14:24:28.102270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.666 [2024-07-26 14:24:28.283036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.925 [2024-07-26 14:24:28.562543] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:08.925 [2024-07-26 14:24:28.562645] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:09.185 [2024-07-26 14:24:28.722825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.722879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:09.185 [2024-07-26 14:24:28.722947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:09.185 [2024-07-26 14:24:28.722964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.725970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.726013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:09.185 [2024-07-26 14:24:28.726062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.978 ms 00:19:09.185 [2024-07-26 14:24:28.726073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.726326] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:09.185 [2024-07-26 14:24:28.727280] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:09.185 [2024-07-26 14:24:28.727319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.727349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:09.185 [2024-07-26 14:24:28.727360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:19:09.185 [2024-07-26 14:24:28.727370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.728751] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:09.185 [2024-07-26 14:24:28.742844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.742883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:09.185 [2024-07-26 14:24:28.742950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.094 ms 00:19:09.185 [2024-07-26 14:24:28.742961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.743087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.743108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:09.185 [2024-07-26 14:24:28.743121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:09.185 [2024-07-26 14:24:28.743131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.747466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:09.185 [2024-07-26 14:24:28.747508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:09.185 [2024-07-26 14:24:28.747538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.283 ms 00:19:09.185 [2024-07-26 14:24:28.747549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.747692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.747715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:09.185 [2024-07-26 14:24:28.747727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:09.185 [2024-07-26 14:24:28.747738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.747783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.747799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:09.185 [2024-07-26 14:24:28.747815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:09.185 [2024-07-26 14:24:28.747825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.747855] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:09.185 [2024-07-26 14:24:28.751695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.751732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:09.185 [2024-07-26 14:24:28.751747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.849 ms 00:19:09.185 [2024-07-26 14:24:28.751757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.751823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.751841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:09.185 [2024-07-26 14:24:28.751852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:09.185 [2024-07-26 14:24:28.751862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.751886] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:09.185 [2024-07-26 14:24:28.751957] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:09.185 [2024-07-26 14:24:28.752016] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:09.185 [2024-07-26 14:24:28.752035] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:09.185 [2024-07-26 14:24:28.752141] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:09.185 [2024-07-26 14:24:28.752155] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:09.185 [2024-07-26 14:24:28.752167] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:09.185 [2024-07-26 14:24:28.752181] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:09.185 [2024-07-26 14:24:28.752192] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:09.185 [2024-07-26 14:24:28.752207] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:09.185 [2024-07-26 14:24:28.752217] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:09.185 [2024-07-26 14:24:28.752227] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:09.185 [2024-07-26 14:24:28.752236] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:09.185 [2024-07-26 14:24:28.752246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.752256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:09.185 [2024-07-26 14:24:28.752267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:19:09.185 [2024-07-26 14:24:28.752277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.752389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.185 [2024-07-26 14:24:28.752403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:09.185 [2024-07-26 14:24:28.752418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:09.185 [2024-07-26 14:24:28.752428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.185 [2024-07-26 14:24:28.752539] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:09.185 [2024-07-26 14:24:28.752554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:09.185 [2024-07-26 14:24:28.752565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:09.185 [2024-07-26 14:24:28.752576] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.185 [2024-07-26 14:24:28.752586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:09.185 [2024-07-26 14:24:28.752595] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:09.185 [2024-07-26 14:24:28.752605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:09.185 [2024-07-26 14:24:28.752615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:09.185 [2024-07-26 14:24:28.752625] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:09.185 [2024-07-26 14:24:28.752650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:09.185 [2024-07-26 14:24:28.752659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:09.185 [2024-07-26 14:24:28.752668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:09.185 [2024-07-26 14:24:28.752677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:09.185 [2024-07-26 14:24:28.752687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:09.185 [2024-07-26 14:24:28.752696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:09.185 [2024-07-26 14:24:28.752705] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.185 [2024-07-26 14:24:28.752714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:09.186 [2024-07-26 14:24:28.752723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:09.186 [2024-07-26 14:24:28.752744] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:09.186 [2024-07-26 14:24:28.752763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.186 [2024-07-26 14:24:28.752795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:09.186 [2024-07-26 14:24:28.752804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752812] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.186 [2024-07-26 14:24:28.752821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:09.186 [2024-07-26 14:24:28.752830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.186 [2024-07-26 14:24:28.752846] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:09.186 [2024-07-26 14:24:28.752855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:09.186 [2024-07-26 14:24:28.752872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:09.186 [2024-07-26 14:24:28.752897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752905] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:09.186 [2024-07-26 14:24:28.752913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:09.186 [2024-07-26 14:24:28.752922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:09.186 [2024-07-26 14:24:28.752930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:09.186 [2024-07-26 14:24:28.752938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:09.186 [2024-07-26 14:24:28.752947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:09.186 [2024-07-26 14:24:28.752955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:09.186 [2024-07-26 14:24:28.752972] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:09.186 [2024-07-26 14:24:28.752981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.186 [2024-07-26 14:24:28.752990] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:09.186 [2024-07-26 14:24:28.752999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:09.186 [2024-07-26 14:24:28.753009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:09.186 [2024-07-26 14:24:28.753018] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:09.186 [2024-07-26 14:24:28.753522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:09.186 [2024-07-26 14:24:28.753568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:09.186 [2024-07-26 14:24:28.753617] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:09.186 
[2024-07-26 14:24:28.753733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:09.186 [2024-07-26 14:24:28.753782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:09.186 [2024-07-26 14:24:28.753819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:09.186 [2024-07-26 14:24:28.753854] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:09.186 [2024-07-26 14:24:28.754023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:09.186 [2024-07-26 14:24:28.754298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:09.186 [2024-07-26 14:24:28.754438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:09.186 [2024-07-26 14:24:28.754634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:09.186 [2024-07-26 14:24:28.754691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:09.186 [2024-07-26 14:24:28.754822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:09.186 [2024-07-26 14:24:28.754845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:09.186 [2024-07-26 14:24:28.754857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:09.186 [2024-07-26 14:24:28.754867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:09.186 [2024-07-26 14:24:28.754878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:09.186 [2024-07-26 14:24:28.754948] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:09.186 [2024-07-26 14:24:28.754961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:09.186 [2024-07-26 14:24:28.754985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:09.186 [2024-07-26 14:24:28.754996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:09.186 [2024-07-26 14:24:28.755007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:09.186 [2024-07-26 14:24:28.755020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.755031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:09.186 [2024-07-26 14:24:28.755044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.538 ms 00:19:09.186 [2024-07-26 14:24:28.755086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.795626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.795958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.186 [2024-07-26 14:24:28.796104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.426 ms 00:19:09.186 [2024-07-26 14:24:28.796155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.796478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.796663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:09.186 [2024-07-26 14:24:28.796786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:09.186 [2024-07-26 14:24:28.796837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.829773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.830113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.186 [2024-07-26 14:24:28.830240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.785 ms 00:19:09.186 [2024-07-26 14:24:28.830292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.830590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.830645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.186 [2024-07-26 14:24:28.830819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:09.186 [2024-07-26 14:24:28.830871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.831314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.831377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.186 [2024-07-26 14:24:28.831523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:09.186 [2024-07-26 14:24:28.831574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.831794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.831862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.186 [2024-07-26 14:24:28.832062] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:19:09.186 [2024-07-26 14:24:28.832115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.846669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.846863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.186 [2024-07-26 14:24:28.847036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.491 ms 00:19:09.186 [2024-07-26 14:24:28.847087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.861302] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:09.186 [2024-07-26 14:24:28.861534] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:09.186 [2024-07-26 14:24:28.861684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.861791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:09.186 [2024-07-26 14:24:28.861840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.326 ms 00:19:09.186 [2024-07-26 14:24:28.861963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.186 [2024-07-26 14:24:28.888147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.186 [2024-07-26 14:24:28.888351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:09.186 [2024-07-26 14:24:28.888474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.995 ms 00:19:09.186 [2024-07-26 14:24:28.888524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.187 [2024-07-26 14:24:28.902439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.187 [2024-07-26 14:24:28.902475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:09.187 [2024-07-26 14:24:28.902505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.726 ms 00:19:09.187 [2024-07-26 14:24:28.902515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.187 [2024-07-26 14:24:28.916022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.187 [2024-07-26 14:24:28.916074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:09.187 [2024-07-26 14:24:28.916089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.424 ms 00:19:09.187 [2024-07-26 14:24:28.916098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.187 [2024-07-26 14:24:28.916870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.187 [2024-07-26 14:24:28.916941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:09.187 [2024-07-26 14:24:28.916959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:19:09.187 [2024-07-26 14:24:28.916969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.445 [2024-07-26 14:24:28.980218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.445 [2024-07-26 14:24:28.980286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:09.445 [2024-07-26 14:24:28.980322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.214 ms 00:19:09.445 [2024-07-26 14:24:28.980333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.445 [2024-07-26 14:24:28.992253] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:09.445 [2024-07-26 14:24:29.005693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.445 [2024-07-26 14:24:29.005754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:09.445 [2024-07-26 14:24:29.005788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.212 ms 00:19:09.445 [2024-07-26 14:24:29.005799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.445 [2024-07-26 14:24:29.005971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.445 [2024-07-26 14:24:29.005992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:09.445 [2024-07-26 14:24:29.006004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:09.445 [2024-07-26 14:24:29.006015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.446 [2024-07-26 14:24:29.006104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.446 [2024-07-26 14:24:29.006124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:09.446 [2024-07-26 14:24:29.006135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:09.446 [2024-07-26 14:24:29.006145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.446 [2024-07-26 14:24:29.006194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.446 [2024-07-26 14:24:29.006214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:09.446 [2024-07-26 14:24:29.006229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:09.446 [2024-07-26 14:24:29.006239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.446 [2024-07-26 14:24:29.006275] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:09.446 [2024-07-26 14:24:29.006291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.446 [2024-07-26 14:24:29.006302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:09.446 [2024-07-26 14:24:29.006313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:09.446 [2024-07-26 14:24:29.006323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.446 [2024-07-26 14:24:29.034790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.446 [2024-07-26 14:24:29.034833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:09.446 [2024-07-26 14:24:29.034864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.437 ms 00:19:09.446 [2024-07-26 14:24:29.034874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.446 [2024-07-26 14:24:29.035053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.446 [2024-07-26 14:24:29.035075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:09.446 [2024-07-26 14:24:29.035088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:09.446 [2024-07-26 14:24:29.035098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:09.446 [2024-07-26 14:24:29.036173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:09.446 [2024-07-26 14:24:29.039966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.930 ms, result 0 00:19:09.446 [2024-07-26 14:24:29.040944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:09.446 [2024-07-26 14:24:29.055479] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:09.706  Copying: 4096/4096 [kB] (average 22 MBps)[2024-07-26 14:24:29.237408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:09.706 [2024-07-26 14:24:29.248332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.248383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:09.706 [2024-07-26 14:24:29.248417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:09.706 [2024-07-26 14:24:29.248427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.248462] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:09.706 [2024-07-26 14:24:29.251341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.251370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:09.706 [2024-07-26 14:24:29.251399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:19:09.706 [2024-07-26 14:24:29.251409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.253195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.253232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:09.706 [2024-07-26 14:24:29.253278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:19:09.706 [2024-07-26 14:24:29.253288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.257043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.257080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:09.706 [2024-07-26 14:24:29.257118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.735 ms 00:19:09.706 [2024-07-26 14:24:29.257129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.263566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.263598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:09.706 [2024-07-26 14:24:29.263627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.397 ms 00:19:09.706 [2024-07-26 14:24:29.263637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.292299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.292338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:09.706 [2024-07-26 14:24:29.292370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
28.574 ms 00:19:09.706 [2024-07-26 14:24:29.292380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.308190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.308230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:09.706 [2024-07-26 14:24:29.308262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.750 ms 00:19:09.706 [2024-07-26 14:24:29.308279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.308456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.308476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:09.706 [2024-07-26 14:24:29.308488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:09.706 [2024-07-26 14:24:29.308498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.335727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.335768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:09.706 [2024-07-26 14:24:29.335785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.209 ms 00:19:09.706 [2024-07-26 14:24:29.335795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.362548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.362585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:09.706 [2024-07-26 14:24:29.362615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.664 ms 00:19:09.706 [2024-07-26 14:24:29.362625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.389040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.389076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:09.706 [2024-07-26 14:24:29.389107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.359 ms 00:19:09.706 [2024-07-26 14:24:29.389117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.416811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.706 [2024-07-26 14:24:29.416850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:09.706 [2024-07-26 14:24:29.416881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.611 ms 00:19:09.706 [2024-07-26 14:24:29.416891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.706 [2024-07-26 14:24:29.416987] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:09.706 [2024-07-26 14:24:29.417014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 
14:24:29.417062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:09.706 [2024-07-26 14:24:29.417303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:09.707 [2024-07-26 14:24:29.417385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.417999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:09.707 [2024-07-26 14:24:29.418290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:09.707 [2024-07-26 14:24:29.418302] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:19:09.707 [2024-07-26 14:24:29.418314] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:09.707 [2024-07-26 14:24:29.418325] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:09.707 
[2024-07-26 14:24:29.418351] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:09.707 [2024-07-26 14:24:29.418363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:09.707 [2024-07-26 14:24:29.418374] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:09.707 [2024-07-26 14:24:29.418396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:09.707 [2024-07-26 14:24:29.418407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:09.707 [2024-07-26 14:24:29.418417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:09.707 [2024-07-26 14:24:29.418427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:09.708 [2024-07-26 14:24:29.418438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.708 [2024-07-26 14:24:29.418449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:09.708 [2024-07-26 14:24:29.418466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.453 ms 00:19:09.708 [2024-07-26 14:24:29.418481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.708 [2024-07-26 14:24:29.434798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.708 [2024-07-26 14:24:29.434848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:09.708 [2024-07-26 14:24:29.434880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.290 ms 00:19:09.708 [2024-07-26 14:24:29.434890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.708 [2024-07-26 14:24:29.435424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.708 [2024-07-26 14:24:29.435466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:09.708 [2024-07-26 14:24:29.435498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:19:09.708 [2024-07-26 14:24:29.435523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.475844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.475913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.967 [2024-07-26 14:24:29.475941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.475955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.476156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.476202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.967 [2024-07-26 14:24:29.476215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.476226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.476295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.476315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.967 [2024-07-26 14:24:29.476327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.476339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.476364] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.476385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.967 [2024-07-26 14:24:29.476396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.476407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.570336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.570408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.967 [2024-07-26 14:24:29.570441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.570452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.650241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.650345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.967 [2024-07-26 14:24:29.650378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.650389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.650494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.650511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:09.967 [2024-07-26 14:24:29.650522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.650532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.650563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.650591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:09.967 [2024-07-26 14:24:29.650617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.650632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.650746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.650764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:09.967 [2024-07-26 14:24:29.650776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.650786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.650838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.650854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:09.967 [2024-07-26 14:24:29.650866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.650882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.650960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.650991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:09.967 [2024-07-26 14:24:29.651005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.651017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:09.967 [2024-07-26 14:24:29.651072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:09.967 [2024-07-26 14:24:29.651089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:09.967 [2024-07-26 14:24:29.651101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:09.967 [2024-07-26 14:24:29.651118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.967 [2024-07-26 14:24:29.651277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 402.933 ms, result 0 00:19:10.904 00:19:10.904 00:19:10.904 14:24:30 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79332 00:19:10.904 14:24:30 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:10.904 14:24:30 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79332 00:19:10.904 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 79332 ']' 00:19:10.904 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.904 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.904 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.904 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.904 14:24:30 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:11.163 [2024-07-26 14:24:30.720999] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:11.163 [2024-07-26 14:24:30.721199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79332 ] 00:19:11.163 [2024-07-26 14:24:30.891037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.421 [2024-07-26 14:24:31.098087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.988 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.988 14:24:31 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:19:11.988 14:24:31 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:12.247 [2024-07-26 14:24:31.980667] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.247 [2024-07-26 14:24:31.980788] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:12.507 [2024-07-26 14:24:32.156615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.156714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:12.507 [2024-07-26 14:24:32.156733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:12.507 [2024-07-26 14:24:32.156747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.159676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.159753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.507 [2024-07-26 14:24:32.159770] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:19:12.507 [2024-07-26 14:24:32.159784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.160046] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:12.507 [2024-07-26 14:24:32.160992] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:12.507 [2024-07-26 14:24:32.161027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.161060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.507 [2024-07-26 14:24:32.161072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:19:12.507 [2024-07-26 14:24:32.161088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.162467] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:12.507 [2024-07-26 14:24:32.176294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.176349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:12.507 [2024-07-26 14:24:32.176384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.823 ms 00:19:12.507 [2024-07-26 14:24:32.176397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.176555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.176586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:12.507 [2024-07-26 14:24:32.176603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:12.507 [2024-07-26 14:24:32.176616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.180916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.180988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.507 [2024-07-26 14:24:32.181027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.236 ms 00:19:12.507 [2024-07-26 14:24:32.181038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.181209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.181238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.507 [2024-07-26 14:24:32.181256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:12.507 [2024-07-26 14:24:32.181272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.181315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.181330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:12.507 [2024-07-26 14:24:32.181345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:12.507 [2024-07-26 14:24:32.181361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.181403] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:12.507 [2024-07-26 14:24:32.185199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:12.507 [2024-07-26 14:24:32.185266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.507 [2024-07-26 14:24:32.185281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.808 ms 00:19:12.507 [2024-07-26 14:24:32.185294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.185354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.185377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:12.507 [2024-07-26 14:24:32.185392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:12.507 [2024-07-26 14:24:32.185405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.185464] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:12.507 [2024-07-26 14:24:32.185493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:12.507 [2024-07-26 14:24:32.185543] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:12.507 [2024-07-26 14:24:32.185572] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:12.507 [2024-07-26 14:24:32.185673] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:12.507 [2024-07-26 14:24:32.185708] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:12.507 [2024-07-26 14:24:32.185725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:12.507 [2024-07-26 14:24:32.185743] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:12.507 [2024-07-26 14:24:32.185757] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:12.507 [2024-07-26 14:24:32.185771] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:12.507 [2024-07-26 14:24:32.185783] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:12.507 [2024-07-26 14:24:32.185796] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:12.507 [2024-07-26 14:24:32.185809] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:12.507 [2024-07-26 14:24:32.185825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.185837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:12.507 [2024-07-26 14:24:32.185851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:19:12.507 [2024-07-26 14:24:32.185864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.185973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.507 [2024-07-26 14:24:32.185997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:12.507 [2024-07-26 14:24:32.186014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:12.507 [2024-07-26 14:24:32.186026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.507 [2024-07-26 14:24:32.186140] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:12.507 [2024-07-26 14:24:32.186167] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:12.507 [2024-07-26 14:24:32.186184] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:12.507 [2024-07-26 14:24:32.186227] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186240] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:12.507 [2024-07-26 14:24:32.186267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186278] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.507 [2024-07-26 14:24:32.186291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:12.507 [2024-07-26 14:24:32.186302] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:12.507 [2024-07-26 14:24:32.186315] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:12.507 [2024-07-26 14:24:32.186326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:12.507 [2024-07-26 14:24:32.186339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:12.507 [2024-07-26 14:24:32.186350] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:12.507 [2024-07-26 14:24:32.186376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186389] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:12.507 [2024-07-26 14:24:32.186413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:12.507 [2024-07-26 14:24:32.186448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186474] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:12.507 [2024-07-26 14:24:32.186487] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186507] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:12.507 [2024-07-26 14:24:32.186535] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186548] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:12.507 [2024-07-26 14:24:32.186559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:12.507 [2024-07-26 
14:24:32.186571] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.507 [2024-07-26 14:24:32.186595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:12.507 [2024-07-26 14:24:32.186607] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:12.507 [2024-07-26 14:24:32.186620] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:12.507 [2024-07-26 14:24:32.186631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:12.507 [2024-07-26 14:24:32.186644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:12.507 [2024-07-26 14:24:32.186655] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:12.507 [2024-07-26 14:24:32.186681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:12.507 [2024-07-26 14:24:32.186694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.507 [2024-07-26 14:24:32.186704] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:12.508 [2024-07-26 14:24:32.186732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:12.508 [2024-07-26 14:24:32.186744] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:12.508 [2024-07-26 14:24:32.186757] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:12.508 [2024-07-26 14:24:32.186768] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:12.508 [2024-07-26 14:24:32.186782] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:12.508 [2024-07-26 14:24:32.186793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:12.508 [2024-07-26 14:24:32.186806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:12.508 [2024-07-26 14:24:32.186816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:12.508 [2024-07-26 14:24:32.186829] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:12.508 [2024-07-26 14:24:32.186840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:12.508 [2024-07-26 14:24:32.186873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.186888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:12.508 [2024-07-26 14:24:32.186905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:12.508 [2024-07-26 14:24:32.186917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:12.508 [2024-07-26 14:24:32.186961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:12.508 [2024-07-26 14:24:32.186976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:12.508 
[2024-07-26 14:24:32.186990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:12.508 [2024-07-26 14:24:32.187003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:12.508 [2024-07-26 14:24:32.187017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:12.508 [2024-07-26 14:24:32.187029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:12.508 [2024-07-26 14:24:32.187043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.187055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.187070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.187082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.187096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:12.508 [2024-07-26 14:24:32.187108] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:12.508 [2024-07-26 14:24:32.187124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.187153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:12.508 [2024-07-26 14:24:32.187186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:12.508 [2024-07-26 14:24:32.187198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:12.508 [2024-07-26 14:24:32.187212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:12.508 [2024-07-26 14:24:32.187226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.187241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:12.508 [2024-07-26 14:24:32.187253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.150 ms 00:19:12.508 [2024-07-26 14:24:32.187284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.217199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.217275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.508 [2024-07-26 14:24:32.217314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.817 ms 00:19:12.508 [2024-07-26 14:24:32.217327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.217541] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.217572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.508 [2024-07-26 14:24:32.217587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:12.508 [2024-07-26 14:24:32.217600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.248237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.248318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.508 [2024-07-26 14:24:32.248335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.607 ms 00:19:12.508 [2024-07-26 14:24:32.248348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.248516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.248548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.508 [2024-07-26 14:24:32.248562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:12.508 [2024-07-26 14:24:32.248575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.248876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.248923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.508 [2024-07-26 14:24:32.248939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:12.508 [2024-07-26 14:24:32.248952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.249093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.249122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.508 [2024-07-26 14:24:32.249135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:12.508 [2024-07-26 14:24:32.249149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.508 [2024-07-26 14:24:32.264545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.508 [2024-07-26 14:24:32.264635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.508 [2024-07-26 14:24:32.264668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.353 ms 00:19:12.508 [2024-07-26 14:24:32.264682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.279113] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:12.767 [2024-07-26 14:24:32.279172] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:12.767 [2024-07-26 14:24:32.279208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.279222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:12.767 [2024-07-26 14:24:32.279234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.348 ms 00:19:12.767 [2024-07-26 14:24:32.279247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.303188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 
14:24:32.303245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:12.767 [2024-07-26 14:24:32.303277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.790 ms 00:19:12.767 [2024-07-26 14:24:32.303293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.315874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.315959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:12.767 [2024-07-26 14:24:32.316001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.451 ms 00:19:12.767 [2024-07-26 14:24:32.316017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.328690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.328744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:12.767 [2024-07-26 14:24:32.328775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.535 ms 00:19:12.767 [2024-07-26 14:24:32.328787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.329679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.329726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.767 [2024-07-26 14:24:32.329757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:19:12.767 [2024-07-26 14:24:32.329770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.398634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.398737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:12.767 [2024-07-26 14:24:32.398759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.834 ms 00:19:12.767 [2024-07-26 14:24:32.398773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.409049] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:12.767 [2024-07-26 14:24:32.420991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.421062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.767 [2024-07-26 14:24:32.421101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.990 ms 00:19:12.767 [2024-07-26 14:24:32.421113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.421245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.421264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:12.767 [2024-07-26 14:24:32.421294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:12.767 [2024-07-26 14:24:32.421320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.421403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.421421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.767 [2024-07-26 14:24:32.421438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:12.767 
[2024-07-26 14:24:32.421450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.421484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.421498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.767 [2024-07-26 14:24:32.421512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:12.767 [2024-07-26 14:24:32.421524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.421566] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:12.767 [2024-07-26 14:24:32.421592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.421609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:12.767 [2024-07-26 14:24:32.421622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:12.767 [2024-07-26 14:24:32.421638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.446829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.446924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.767 [2024-07-26 14:24:32.446961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.159 ms 00:19:12.767 [2024-07-26 14:24:32.446974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.447111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.767 [2024-07-26 14:24:32.447137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.767 [2024-07-26 14:24:32.447152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:12.767 [2024-07-26 14:24:32.447181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.767 [2024-07-26 14:24:32.448367] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:12.767 [2024-07-26 14:24:32.452214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.288 ms, result 0 00:19:12.767 [2024-07-26 14:24:32.453420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:12.767 Some configs were skipped because the RPC state that can call them passed over. 
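The block above is the setup half of the trim test: spdk_tgt is launched with -L ftl_init, the harness waits for the RPC socket at /var/tmp/spdk.sock, and rpc.py load_config replays the saved bdev/FTL configuration so ftl0 comes back up and is restored from the NV cache before any trims are issued. Below is a minimal shell sketch of that flow, pieced together from the commands visible in the trace; the rootdir variable, the test/common/autotest_common.sh source path, and feeding ftl.json to load_config on stdin are assumptions for illustration, while waitforlisten, killprocess, and the bdev_ftl_unmap arguments are exactly as traced.

  # Sketch only, not the verbatim trim.sh: reproduce the spdk_tgt + load_config + unmap flow above.
  rootdir=/home/vagrant/spdk_repo/spdk
  source "$rootdir/test/common/autotest_common.sh"    # provides waitforlisten/killprocess (path assumed)

  "$rootdir/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!
  waitforlisten "$svcpid"                             # blocks until /var/tmp/spdk.sock accepts RPCs

  # Restore the FTL bdev from the saved JSON config (stdin redirect is an assumption)
  "$rootdir/scripts/rpc.py" load_config < "$rootdir/test/ftl/config/ftl.json"

  # Trim the first and the last 1024-block range of the 23592960-entry L2P, as trim.sh@99/@100 do
  "$rootdir/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$rootdir/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

  killprocess "$svcpid"                               # kill + wait on the pid, producing the 'FTL shutdown' trace

After both unmaps return true, killprocess kills the target pid and waits on it, which is what produces the "FTL shutdown" management sequence traced in the lines that follow.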
00:19:12.767 14:24:32 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:13.026 [2024-07-26 14:24:32.721112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.026 [2024-07-26 14:24:32.721171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:13.026 [2024-07-26 14:24:32.721199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:19:13.026 [2024-07-26 14:24:32.721213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.026 [2024-07-26 14:24:32.721269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.660 ms, result 0 00:19:13.026 true 00:19:13.026 14:24:32 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:13.285 [2024-07-26 14:24:32.973330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.285 [2024-07-26 14:24:32.973395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:13.285 [2024-07-26 14:24:32.973415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.332 ms 00:19:13.285 [2024-07-26 14:24:32.973428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.285 [2024-07-26 14:24:32.973475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.492 ms, result 0 00:19:13.285 true 00:19:13.285 14:24:32 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79332 00:19:13.285 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79332 ']' 00:19:13.285 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79332 00:19:13.285 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:19:13.285 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.285 14:24:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79332 00:19:13.285 killing process with pid 79332 00:19:13.285 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.285 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.285 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79332' 00:19:13.285 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 79332 00:19:13.285 14:24:33 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 79332 00:19:14.222 [2024-07-26 14:24:33.817332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.222 [2024-07-26 14:24:33.817412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:14.222 [2024-07-26 14:24:33.817450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:14.222 [2024-07-26 14:24:33.817464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.222 [2024-07-26 14:24:33.817496] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:14.222 [2024-07-26 14:24:33.820552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.222 [2024-07-26 14:24:33.820601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:14.222 [2024-07-26 14:24:33.820630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.035 ms 00:19:14.222 [2024-07-26 14:24:33.820645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.222 [2024-07-26 14:24:33.820984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.222 [2024-07-26 14:24:33.821013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:14.222 [2024-07-26 14:24:33.821027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:14.222 [2024-07-26 14:24:33.821041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.222 [2024-07-26 14:24:33.824866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.222 [2024-07-26 14:24:33.825013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:14.222 [2024-07-26 14:24:33.825033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.801 ms 00:19:14.222 [2024-07-26 14:24:33.825047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.831554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.831605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:14.223 [2024-07-26 14:24:33.831635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.432 ms 00:19:14.223 [2024-07-26 14:24:33.831650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.842642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.842714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:14.223 [2024-07-26 14:24:33.842730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.897 ms 00:19:14.223 [2024-07-26 14:24:33.842746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.850685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.850745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:14.223 [2024-07-26 14:24:33.850776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.898 ms 00:19:14.223 [2024-07-26 14:24:33.850789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.850937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.850960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:14.223 [2024-07-26 14:24:33.851005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:14.223 [2024-07-26 14:24:33.851045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.862419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.862488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:14.223 [2024-07-26 14:24:33.862504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.349 ms 00:19:14.223 [2024-07-26 14:24:33.862516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.873906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.873961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:14.223 [2024-07-26 
14:24:33.873992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.351 ms 00:19:14.223 [2024-07-26 14:24:33.874008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.885014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.885083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:14.223 [2024-07-26 14:24:33.885099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.964 ms 00:19:14.223 [2024-07-26 14:24:33.885112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.896456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.223 [2024-07-26 14:24:33.896528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:14.223 [2024-07-26 14:24:33.896558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.279 ms 00:19:14.223 [2024-07-26 14:24:33.896572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.223 [2024-07-26 14:24:33.896615] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:14.223 [2024-07-26 14:24:33.896642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896917] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.896989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 
14:24:33.897289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:14.223 [2024-07-26 14:24:33.897569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:19:14.224 [2024-07-26 14:24:33.897642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.897997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:14.224 [2024-07-26 14:24:33.898150] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:14.224 [2024-07-26 14:24:33.898163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:19:14.224 [2024-07-26 14:24:33.898180] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:14.224 [2024-07-26 14:24:33.898193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:14.224 [2024-07-26 14:24:33.898207] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:14.224 [2024-07-26 14:24:33.898220] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:14.224 [2024-07-26 14:24:33.898234] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:14.224 [2024-07-26 14:24:33.898247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:14.224 [2024-07-26 14:24:33.898262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:14.224 [2024-07-26 14:24:33.898273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:14.224 [2024-07-26 14:24:33.898299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:14.224 [2024-07-26 14:24:33.898312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.224 [2024-07-26 14:24:33.898327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:14.224 [2024-07-26 14:24:33.898341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:19:14.224 [2024-07-26 14:24:33.898359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.224 [2024-07-26 14:24:33.913619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.224 [2024-07-26 14:24:33.913687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:14.224 [2024-07-26 14:24:33.913703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.218 ms 00:19:14.224 [2024-07-26 14:24:33.913718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.224 [2024-07-26 14:24:33.914271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:14.224 [2024-07-26 14:24:33.914341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:14.224 [2024-07-26 14:24:33.914360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:19:14.224 [2024-07-26 14:24:33.914375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.224 [2024-07-26 14:24:33.962692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.224 [2024-07-26 14:24:33.962780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.224 [2024-07-26 14:24:33.962796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.224 [2024-07-26 14:24:33.962810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.224 [2024-07-26 14:24:33.962936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.224 [2024-07-26 14:24:33.962958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.224 [2024-07-26 14:24:33.962973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.224 [2024-07-26 14:24:33.962986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.224 [2024-07-26 14:24:33.963091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.224 [2024-07-26 14:24:33.963112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.224 [2024-07-26 14:24:33.963126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.224 [2024-07-26 14:24:33.963142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.224 [2024-07-26 14:24:33.963167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.224 [2024-07-26 14:24:33.963183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.224 [2024-07-26 14:24:33.963196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.224 [2024-07-26 14:24:33.963212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.483 [2024-07-26 14:24:34.046756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.483 [2024-07-26 14:24:34.046853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.483 [2024-07-26 14:24:34.046872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.483 [2024-07-26 14:24:34.046885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.483 [2024-07-26 14:24:34.117169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.483 [2024-07-26 14:24:34.117264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.483 [2024-07-26 14:24:34.117286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.483 [2024-07-26 14:24:34.117301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.483 [2024-07-26 14:24:34.117417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.483 [2024-07-26 14:24:34.117439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.483 [2024-07-26 14:24:34.117454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.483 [2024-07-26 14:24:34.117469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:14.483 [2024-07-26 14:24:34.117504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.483 [2024-07-26 14:24:34.117551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.483 [2024-07-26 14:24:34.117578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.483 [2024-07-26 14:24:34.117591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.483 [2024-07-26 14:24:34.117707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.483 [2024-07-26 14:24:34.117730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.483 [2024-07-26 14:24:34.117743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.483 [2024-07-26 14:24:34.117756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.483 [2024-07-26 14:24:34.117806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.483 [2024-07-26 14:24:34.117827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.484 [2024-07-26 14:24:34.117840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.484 [2024-07-26 14:24:34.117854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.484 [2024-07-26 14:24:34.117920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.484 [2024-07-26 14:24:34.117938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.484 [2024-07-26 14:24:34.117951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.484 [2024-07-26 14:24:34.117968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.484 [2024-07-26 14:24:34.118055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.484 [2024-07-26 14:24:34.118079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.484 [2024-07-26 14:24:34.118093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.484 [2024-07-26 14:24:34.118107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.484 [2024-07-26 14:24:34.118284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 300.931 ms, result 0 00:19:15.419 14:24:34 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:15.419 [2024-07-26 14:24:35.072369] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:15.419 [2024-07-26 14:24:35.072553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79396 ] 00:19:15.698 [2024-07-26 14:24:35.242530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.698 [2024-07-26 14:24:35.398773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.959 [2024-07-26 14:24:35.685891] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.959 [2024-07-26 14:24:35.686010] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:16.218 [2024-07-26 14:24:35.847230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.847313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:16.218 [2024-07-26 14:24:35.847349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:16.218 [2024-07-26 14:24:35.847361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.850366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.850421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:16.218 [2024-07-26 14:24:35.850453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:19:16.218 [2024-07-26 14:24:35.850463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.850602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:16.218 [2024-07-26 14:24:35.851579] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:16.218 [2024-07-26 14:24:35.851633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.851661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:16.218 [2024-07-26 14:24:35.851698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:19:16.218 [2024-07-26 14:24:35.851709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.853103] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:16.218 [2024-07-26 14:24:35.867118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.867170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:16.218 [2024-07-26 14:24:35.867207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.016 ms 00:19:16.218 [2024-07-26 14:24:35.867217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.867326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.867347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:16.218 [2024-07-26 14:24:35.867359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:16.218 [2024-07-26 14:24:35.867368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.871549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:16.218 [2024-07-26 14:24:35.871602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:16.218 [2024-07-26 14:24:35.871631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:19:16.218 [2024-07-26 14:24:35.871642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.871783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.871805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:16.218 [2024-07-26 14:24:35.871818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:16.218 [2024-07-26 14:24:35.871829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.871870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.871887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:16.218 [2024-07-26 14:24:35.871918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:16.218 [2024-07-26 14:24:35.871953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.871992] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:16.218 [2024-07-26 14:24:35.875837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.875890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:16.218 [2024-07-26 14:24:35.875917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.854 ms 00:19:16.218 [2024-07-26 14:24:35.875928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.876025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.218 [2024-07-26 14:24:35.876043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:16.218 [2024-07-26 14:24:35.876054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:16.218 [2024-07-26 14:24:35.876064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.218 [2024-07-26 14:24:35.876106] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:16.218 [2024-07-26 14:24:35.876182] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:16.218 [2024-07-26 14:24:35.876228] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:16.218 [2024-07-26 14:24:35.876250] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:16.218 [2024-07-26 14:24:35.876351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:16.218 [2024-07-26 14:24:35.876367] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:16.219 [2024-07-26 14:24:35.876382] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:16.219 [2024-07-26 14:24:35.876397] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:16.219 [2024-07-26 14:24:35.876411] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:16.219 [2024-07-26 14:24:35.876427] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:16.219 [2024-07-26 14:24:35.876437] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:16.219 [2024-07-26 14:24:35.876447] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:16.219 [2024-07-26 14:24:35.876457] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:16.219 [2024-07-26 14:24:35.876468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.219 [2024-07-26 14:24:35.876479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:16.219 [2024-07-26 14:24:35.876490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:19:16.219 [2024-07-26 14:24:35.876500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.219 [2024-07-26 14:24:35.876590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.219 [2024-07-26 14:24:35.876605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:16.219 [2024-07-26 14:24:35.876621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:16.219 [2024-07-26 14:24:35.876631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.219 [2024-07-26 14:24:35.876733] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:16.219 [2024-07-26 14:24:35.876759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:16.219 [2024-07-26 14:24:35.876773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:16.219 [2024-07-26 14:24:35.876783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.876794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:16.219 [2024-07-26 14:24:35.876804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.876814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:16.219 [2024-07-26 14:24:35.876824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:16.219 [2024-07-26 14:24:35.876834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:16.219 [2024-07-26 14:24:35.876844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:16.219 [2024-07-26 14:24:35.876853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:16.219 [2024-07-26 14:24:35.876863] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:16.219 [2024-07-26 14:24:35.876872] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:16.219 [2024-07-26 14:24:35.876882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:16.219 [2024-07-26 14:24:35.876892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:16.219 [2024-07-26 14:24:35.876918] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.876929] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:16.219 [2024-07-26 14:24:35.876938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:16.219 [2024-07-26 14:24:35.876961] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.876971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:16.219 [2024-07-26 14:24:35.876981] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:16.219 [2024-07-26 14:24:35.876991] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.219 [2024-07-26 14:24:35.877000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:16.219 [2024-07-26 14:24:35.877010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.219 [2024-07-26 14:24:35.877029] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:16.219 [2024-07-26 14:24:35.877038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877048] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.219 [2024-07-26 14:24:35.877057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:16.219 [2024-07-26 14:24:35.877067] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:16.219 [2024-07-26 14:24:35.877086] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:16.219 [2024-07-26 14:24:35.877095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:16.219 [2024-07-26 14:24:35.877114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:16.219 [2024-07-26 14:24:35.877139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:16.219 [2024-07-26 14:24:35.877149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:16.219 [2024-07-26 14:24:35.877159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:16.219 [2024-07-26 14:24:35.877169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:16.219 [2024-07-26 14:24:35.877179] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:16.219 [2024-07-26 14:24:35.877199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:16.219 [2024-07-26 14:24:35.877208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877217] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:16.219 [2024-07-26 14:24:35.877228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:16.219 [2024-07-26 14:24:35.877239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:16.219 [2024-07-26 14:24:35.877249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:16.219 [2024-07-26 14:24:35.877266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:16.219 [2024-07-26 14:24:35.877276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:16.219 [2024-07-26 14:24:35.877287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:16.219 
[2024-07-26 14:24:35.877297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:16.219 [2024-07-26 14:24:35.877307] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:16.219 [2024-07-26 14:24:35.877317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:16.219 [2024-07-26 14:24:35.877328] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:16.219 [2024-07-26 14:24:35.877342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:16.219 [2024-07-26 14:24:35.877354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:16.219 [2024-07-26 14:24:35.877365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:16.219 [2024-07-26 14:24:35.877376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:16.219 [2024-07-26 14:24:35.877394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:16.219 [2024-07-26 14:24:35.877405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:16.219 [2024-07-26 14:24:35.877416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:16.219 [2024-07-26 14:24:35.877427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:16.219 [2024-07-26 14:24:35.877437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:16.219 [2024-07-26 14:24:35.877448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:16.219 [2024-07-26 14:24:35.877459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:16.219 [2024-07-26 14:24:35.877470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:16.219 [2024-07-26 14:24:35.877481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:16.219 [2024-07-26 14:24:35.877506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:16.219 [2024-07-26 14:24:35.877517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:16.219 [2024-07-26 14:24:35.877528] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:16.219 [2024-07-26 14:24:35.877540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:16.220 [2024-07-26 14:24:35.877551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:16.220 [2024-07-26 14:24:35.877562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:16.220 [2024-07-26 14:24:35.877572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:16.220 [2024-07-26 14:24:35.877583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:16.220 [2024-07-26 14:24:35.877595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.877605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:16.220 [2024-07-26 14:24:35.877616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:19:16.220 [2024-07-26 14:24:35.877627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.916678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.916753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:16.220 [2024-07-26 14:24:35.916794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.958 ms 00:19:16.220 [2024-07-26 14:24:35.916805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.917011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.917032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:16.220 [2024-07-26 14:24:35.917050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:16.220 [2024-07-26 14:24:35.917076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.949689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.949754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:16.220 [2024-07-26 14:24:35.949787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.549 ms 00:19:16.220 [2024-07-26 14:24:35.949798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.949954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.949973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:16.220 [2024-07-26 14:24:35.949985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:16.220 [2024-07-26 14:24:35.949995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.950379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.950407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:16.220 [2024-07-26 14:24:35.950421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:19:16.220 [2024-07-26 14:24:35.950432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.950603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.950631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:16.220 [2024-07-26 14:24:35.950643] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:19:16.220 [2024-07-26 14:24:35.950654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.220 [2024-07-26 14:24:35.964986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.220 [2024-07-26 14:24:35.965037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:16.220 [2024-07-26 14:24:35.965068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.304 ms 00:19:16.220 [2024-07-26 14:24:35.965079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:35.979833] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:16.479 [2024-07-26 14:24:35.979878] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:16.479 [2024-07-26 14:24:35.979907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:35.979922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:16.479 [2024-07-26 14:24:35.979936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.683 ms 00:19:16.479 [2024-07-26 14:24:35.979947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.005631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.005683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:16.479 [2024-07-26 14:24:36.005714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.519 ms 00:19:16.479 [2024-07-26 14:24:36.005725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.019432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.019482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:16.479 [2024-07-26 14:24:36.019512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.614 ms 00:19:16.479 [2024-07-26 14:24:36.019529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.033603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.033671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:16.479 [2024-07-26 14:24:36.033701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.991 ms 00:19:16.479 [2024-07-26 14:24:36.033711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.034661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.034706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:16.479 [2024-07-26 14:24:36.034735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:19:16.479 [2024-07-26 14:24:36.034745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.098058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.098157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:16.479 [2024-07-26 14:24:36.098192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.281 ms 00:19:16.479 [2024-07-26 14:24:36.098204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.110418] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:16.479 [2024-07-26 14:24:36.122919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.122997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:16.479 [2024-07-26 14:24:36.123030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.558 ms 00:19:16.479 [2024-07-26 14:24:36.123041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.123179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.123199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:16.479 [2024-07-26 14:24:36.123211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:16.479 [2024-07-26 14:24:36.123221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.123299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.123331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:16.479 [2024-07-26 14:24:36.123342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:16.479 [2024-07-26 14:24:36.123368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.123400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.123420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:16.479 [2024-07-26 14:24:36.123431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:16.479 [2024-07-26 14:24:36.123442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.123480] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:16.479 [2024-07-26 14:24:36.123495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.123505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:16.479 [2024-07-26 14:24:36.123516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:16.479 [2024-07-26 14:24:36.123527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.149435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.149493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:16.479 [2024-07-26 14:24:36.149524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.880 ms 00:19:16.479 [2024-07-26 14:24:36.149535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.479 [2024-07-26 14:24:36.149633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.479 [2024-07-26 14:24:36.149651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:16.479 [2024-07-26 14:24:36.149663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:16.479 [2024-07-26 14:24:36.149673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:16.479 [2024-07-26 14:24:36.150872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:16.479 [2024-07-26 14:24:36.154552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.267 ms, result 0 00:19:16.479 [2024-07-26 14:24:36.155459] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.479 [2024-07-26 14:24:36.169753] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:28.206  Copying: 25/256 [MB] (25 MBps) Copying: 48/256 [MB] (22 MBps) Copying: 70/256 [MB] (22 MBps) Copying: 93/256 [MB] (22 MBps) Copying: 115/256 [MB] (22 MBps) Copying: 137/256 [MB] (21 MBps) Copying: 159/256 [MB] (22 MBps) Copying: 182/256 [MB] (22 MBps) Copying: 204/256 [MB] (22 MBps) Copying: 227/256 [MB] (22 MBps) Copying: 249/256 [MB] (22 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-26 14:24:47.847765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:28.206 [2024-07-26 14:24:47.861318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.861377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.206 [2024-07-26 14:24:47.861411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:28.206 [2024-07-26 14:24:47.861422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.861476] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.206 [2024-07-26 14:24:47.864669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.864716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.206 [2024-07-26 14:24:47.864745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.172 ms 00:19:28.206 [2024-07-26 14:24:47.864756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.865080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.865100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.206 [2024-07-26 14:24:47.865113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:19:28.206 [2024-07-26 14:24:47.865124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.868621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.868665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.206 [2024-07-26 14:24:47.868700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.475 ms 00:19:28.206 [2024-07-26 14:24:47.868711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.876762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.876831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.206 [2024-07-26 14:24:47.876861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.027 ms 00:19:28.206 [2024-07-26 14:24:47.876872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:28.206 [2024-07-26 14:24:47.907490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.907546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.206 [2024-07-26 14:24:47.907577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.532 ms 00:19:28.206 [2024-07-26 14:24:47.907587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.924210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.924263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.206 [2024-07-26 14:24:47.924294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.544 ms 00:19:28.206 [2024-07-26 14:24:47.924309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.924462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.924482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.206 [2024-07-26 14:24:47.924510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:28.206 [2024-07-26 14:24:47.924520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.206 [2024-07-26 14:24:47.952899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.206 [2024-07-26 14:24:47.952949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:28.206 [2024-07-26 14:24:47.952980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.342 ms 00:19:28.206 [2024-07-26 14:24:47.952991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.466 [2024-07-26 14:24:47.984966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.466 [2024-07-26 14:24:47.985030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:28.466 [2024-07-26 14:24:47.985077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.828 ms 00:19:28.466 [2024-07-26 14:24:47.985088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.466 [2024-07-26 14:24:48.016877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.466 [2024-07-26 14:24:48.016960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.466 [2024-07-26 14:24:48.016977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.723 ms 00:19:28.466 [2024-07-26 14:24:48.016988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.466 [2024-07-26 14:24:48.047441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.466 [2024-07-26 14:24:48.047495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.466 [2024-07-26 14:24:48.047525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.352 ms 00:19:28.466 [2024-07-26 14:24:48.047544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.466 [2024-07-26 14:24:48.047622] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.466 [2024-07-26 14:24:48.047653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.466 [2024-07-26 14:24:48.047782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.047995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048336] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048618] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.467 [2024-07-26 14:24:48.048820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.468 [2024-07-26 14:24:48.048831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.468 [2024-07-26 14:24:48.048842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.468 [2024-07-26 14:24:48.048853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.468 [2024-07-26 14:24:48.048864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.468 [2024-07-26 14:24:48.048876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.468 [2024-07-26 14:24:48.048896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.468 [2024-07-26 14:24:48.048916] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 49cb4bb5-d6b9-48f8-b17f-5687a932e782 00:19:28.468 [2024-07-26 14:24:48.048930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.468 [2024-07-26 14:24:48.048941] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.468 [2024-07-26 14:24:48.048964] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.468 [2024-07-26 14:24:48.048975] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.468 [2024-07-26 14:24:48.048985] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.468 [2024-07-26 14:24:48.048996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.468 [2024-07-26 14:24:48.049007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.468 [2024-07-26 14:24:48.049017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.468 [2024-07-26 14:24:48.049026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.468 [2024-07-26 14:24:48.049037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.468 [2024-07-26 14:24:48.049048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.468 [2024-07-26 14:24:48.049065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:19:28.468 [2024-07-26 14:24:48.049076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.064363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.468 [2024-07-26 14:24:48.064433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.468 [2024-07-26 14:24:48.064448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.260 ms 00:19:28.468 [2024-07-26 14:24:48.064459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.064964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.468 [2024-07-26 14:24:48.065018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.468 [2024-07-26 14:24:48.065033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:28.468 [2024-07-26 14:24:48.065044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.104364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.468 [2024-07-26 14:24:48.104431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.468 [2024-07-26 14:24:48.104461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.468 [2024-07-26 14:24:48.104471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.104573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.468 [2024-07-26 14:24:48.104592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.468 [2024-07-26 14:24:48.104602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.468 [2024-07-26 14:24:48.104612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.104698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.468 [2024-07-26 14:24:48.104731] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.468 [2024-07-26 14:24:48.104743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.468 [2024-07-26 14:24:48.104754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.104777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.468 [2024-07-26 14:24:48.104791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.468 [2024-07-26 14:24:48.104808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.468 [2024-07-26 14:24:48.104818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.468 [2024-07-26 14:24:48.187278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.468 [2024-07-26 14:24:48.187350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.468 [2024-07-26 14:24:48.187382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.468 [2024-07-26 14:24:48.187392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.727 [2024-07-26 14:24:48.259066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.727 [2024-07-26 14:24:48.259138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.727 [2024-07-26 14:24:48.259169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.727 [2024-07-26 14:24:48.259179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.727 [2024-07-26 14:24:48.259254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.727 [2024-07-26 14:24:48.259270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.727 [2024-07-26 14:24:48.259281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.727 [2024-07-26 14:24:48.259291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.727 [2024-07-26 14:24:48.259322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.727 [2024-07-26 14:24:48.259335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.727 [2024-07-26 14:24:48.259345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.727 [2024-07-26 14:24:48.259359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.727 [2024-07-26 14:24:48.259516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.727 [2024-07-26 14:24:48.259534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.727 [2024-07-26 14:24:48.259546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.727 [2024-07-26 14:24:48.259557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.727 [2024-07-26 14:24:48.259606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.727 [2024-07-26 14:24:48.259623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.727 [2024-07-26 14:24:48.259634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.727 [2024-07-26 14:24:48.259645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.728 [2024-07-26 14:24:48.259696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:28.728 [2024-07-26 14:24:48.259753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.728 [2024-07-26 14:24:48.259767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.728 [2024-07-26 14:24:48.259778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.728 [2024-07-26 14:24:48.259833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.728 [2024-07-26 14:24:48.259856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.728 [2024-07-26 14:24:48.259869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.728 [2024-07-26 14:24:48.259887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.728 [2024-07-26 14:24:48.260101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.773 ms, result 0 00:19:29.664 00:19:29.664 00:19:29.664 14:24:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:30.231 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:30.231 14:24:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79332 00:19:30.231 14:24:49 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 79332 ']' 00:19:30.231 14:24:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 79332 00:19:30.231 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79332) - No such process 00:19:30.231 Process with pid 79332 is not found 00:19:30.231 14:24:49 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 79332 is not found' 00:19:30.231 00:19:30.231 real 1m9.539s 00:19:30.231 user 1m34.432s 00:19:30.231 sys 0m6.299s 00:19:30.231 14:24:49 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:30.232 14:24:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:30.232 ************************************ 00:19:30.232 END TEST ftl_trim 00:19:30.232 ************************************ 00:19:30.232 14:24:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:30.232 14:24:49 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:30.232 14:24:49 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:30.232 14:24:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:30.232 ************************************ 00:19:30.232 START TEST ftl_restore 00:19:30.232 ************************************ 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:30.232 * Looking for test storage... 
00:19:30.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.JPC5ybhH2S 00:19:30.232 14:24:49 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79597 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79597 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 79597 ']' 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.232 14:24:49 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.232 14:24:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:30.491 [2024-07-26 14:24:50.125296] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:30.491 [2024-07-26 14:24:50.125518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79597 ] 00:19:30.750 [2024-07-26 14:24:50.297666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.009 [2024-07-26 14:24:50.514916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.577 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.577 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:19:31.577 14:24:51 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:31.577 14:24:51 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:31.577 14:24:51 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:31.577 14:24:51 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:31.577 14:24:51 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:31.577 14:24:51 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:31.836 14:24:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:31.836 14:24:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:31.836 14:24:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:31.836 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:19:31.836 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:31.836 14:24:51 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:19:31.836 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:31.836 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:32.095 { 00:19:32.095 "name": "nvme0n1", 00:19:32.095 "aliases": [ 00:19:32.095 "30fcf83c-4615-4a1d-8851-8b53d2359ee6" 00:19:32.095 ], 00:19:32.095 "product_name": "NVMe disk", 00:19:32.095 "block_size": 4096, 00:19:32.095 "num_blocks": 1310720, 00:19:32.095 "uuid": "30fcf83c-4615-4a1d-8851-8b53d2359ee6", 00:19:32.095 "assigned_rate_limits": { 00:19:32.095 "rw_ios_per_sec": 0, 00:19:32.095 "rw_mbytes_per_sec": 0, 00:19:32.095 "r_mbytes_per_sec": 0, 00:19:32.095 "w_mbytes_per_sec": 0 00:19:32.095 }, 00:19:32.095 "claimed": true, 00:19:32.095 "claim_type": "read_many_write_one", 00:19:32.095 "zoned": false, 00:19:32.095 "supported_io_types": { 00:19:32.095 "read": true, 00:19:32.095 "write": true, 00:19:32.095 "unmap": true, 00:19:32.095 "flush": true, 00:19:32.095 "reset": true, 00:19:32.095 "nvme_admin": true, 00:19:32.095 "nvme_io": true, 00:19:32.095 "nvme_io_md": false, 00:19:32.095 "write_zeroes": true, 00:19:32.095 "zcopy": false, 00:19:32.095 "get_zone_info": false, 00:19:32.095 "zone_management": false, 00:19:32.095 "zone_append": false, 00:19:32.095 "compare": true, 00:19:32.095 "compare_and_write": false, 00:19:32.095 "abort": true, 00:19:32.095 "seek_hole": false, 00:19:32.095 "seek_data": false, 00:19:32.095 "copy": true, 00:19:32.095 "nvme_iov_md": false 00:19:32.095 }, 00:19:32.095 "driver_specific": { 00:19:32.095 "nvme": [ 00:19:32.095 { 00:19:32.095 "pci_address": "0000:00:11.0", 00:19:32.095 "trid": { 00:19:32.095 "trtype": "PCIe", 00:19:32.095 "traddr": "0000:00:11.0" 00:19:32.095 }, 00:19:32.095 "ctrlr_data": { 00:19:32.095 "cntlid": 0, 00:19:32.095 "vendor_id": "0x1b36", 00:19:32.095 "model_number": "QEMU NVMe Ctrl", 00:19:32.095 "serial_number": "12341", 00:19:32.095 "firmware_revision": "8.0.0", 00:19:32.095 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:32.095 "oacs": { 00:19:32.095 "security": 0, 00:19:32.095 "format": 1, 00:19:32.095 "firmware": 0, 00:19:32.095 "ns_manage": 1 00:19:32.095 }, 00:19:32.095 "multi_ctrlr": false, 00:19:32.095 "ana_reporting": false 00:19:32.095 }, 00:19:32.095 "vs": { 00:19:32.095 "nvme_version": "1.4" 00:19:32.095 }, 00:19:32.095 "ns_data": { 00:19:32.095 "id": 1, 00:19:32.095 "can_share": false 00:19:32.095 } 00:19:32.095 } 00:19:32.095 ], 00:19:32.095 "mp_policy": "active_passive" 00:19:32.095 } 00:19:32.095 } 00:19:32.095 ]' 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:19:32.095 14:24:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:19:32.095 14:24:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:32.095 14:24:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:32.095 14:24:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:32.095 14:24:51 ftl.ftl_restore -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:32.095 14:24:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:32.354 14:24:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b9b29176-b228-4661-95f8-1f3561874bbd 00:19:32.354 14:24:51 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:32.354 14:24:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9b29176-b228-4661-95f8-1f3561874bbd 00:19:32.614 14:24:52 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:32.878 14:24:52 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=5c24ad78-8741-4065-b239-a5cc577c2b5b 00:19:32.878 14:24:52 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5c24ad78-8741-4065-b239-a5cc577c2b5b 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:33.137 14:24:52 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.137 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.137 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:33.137 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:33.137 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:33.137 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.396 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:33.396 { 00:19:33.396 "name": "19a7092a-9595-4a6f-b0c4-64e1331cb197", 00:19:33.396 "aliases": [ 00:19:33.396 "lvs/nvme0n1p0" 00:19:33.396 ], 00:19:33.396 "product_name": "Logical Volume", 00:19:33.396 "block_size": 4096, 00:19:33.396 "num_blocks": 26476544, 00:19:33.396 "uuid": "19a7092a-9595-4a6f-b0c4-64e1331cb197", 00:19:33.396 "assigned_rate_limits": { 00:19:33.396 "rw_ios_per_sec": 0, 00:19:33.396 "rw_mbytes_per_sec": 0, 00:19:33.396 "r_mbytes_per_sec": 0, 00:19:33.396 "w_mbytes_per_sec": 0 00:19:33.396 }, 00:19:33.396 "claimed": false, 00:19:33.396 "zoned": false, 00:19:33.396 "supported_io_types": { 00:19:33.396 "read": true, 00:19:33.396 "write": true, 00:19:33.396 "unmap": true, 00:19:33.396 "flush": false, 00:19:33.396 "reset": true, 00:19:33.396 "nvme_admin": false, 00:19:33.396 "nvme_io": false, 00:19:33.396 "nvme_io_md": false, 00:19:33.396 "write_zeroes": true, 00:19:33.396 "zcopy": false, 00:19:33.396 "get_zone_info": false, 00:19:33.396 "zone_management": false, 00:19:33.396 "zone_append": false, 00:19:33.396 "compare": false, 00:19:33.396 "compare_and_write": false, 00:19:33.396 "abort": 
false, 00:19:33.396 "seek_hole": true, 00:19:33.396 "seek_data": true, 00:19:33.396 "copy": false, 00:19:33.396 "nvme_iov_md": false 00:19:33.396 }, 00:19:33.396 "driver_specific": { 00:19:33.396 "lvol": { 00:19:33.396 "lvol_store_uuid": "5c24ad78-8741-4065-b239-a5cc577c2b5b", 00:19:33.396 "base_bdev": "nvme0n1", 00:19:33.396 "thin_provision": true, 00:19:33.396 "num_allocated_clusters": 0, 00:19:33.396 "snapshot": false, 00:19:33.396 "clone": false, 00:19:33.396 "esnap_clone": false 00:19:33.396 } 00:19:33.396 } 00:19:33.396 } 00:19:33.396 ]' 00:19:33.396 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:33.396 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:33.396 14:24:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:33.396 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:33.396 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:33.396 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:33.396 14:24:53 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:33.396 14:24:53 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:33.396 14:24:53 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:33.654 14:24:53 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:33.654 14:24:53 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:33.654 14:24:53 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.654 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.654 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:33.654 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:33.654 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:33.654 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:33.913 { 00:19:33.913 "name": "19a7092a-9595-4a6f-b0c4-64e1331cb197", 00:19:33.913 "aliases": [ 00:19:33.913 "lvs/nvme0n1p0" 00:19:33.913 ], 00:19:33.913 "product_name": "Logical Volume", 00:19:33.913 "block_size": 4096, 00:19:33.913 "num_blocks": 26476544, 00:19:33.913 "uuid": "19a7092a-9595-4a6f-b0c4-64e1331cb197", 00:19:33.913 "assigned_rate_limits": { 00:19:33.913 "rw_ios_per_sec": 0, 00:19:33.913 "rw_mbytes_per_sec": 0, 00:19:33.913 "r_mbytes_per_sec": 0, 00:19:33.913 "w_mbytes_per_sec": 0 00:19:33.913 }, 00:19:33.913 "claimed": false, 00:19:33.913 "zoned": false, 00:19:33.913 "supported_io_types": { 00:19:33.913 "read": true, 00:19:33.913 "write": true, 00:19:33.913 "unmap": true, 00:19:33.913 "flush": false, 00:19:33.913 "reset": true, 00:19:33.913 "nvme_admin": false, 00:19:33.913 "nvme_io": false, 00:19:33.913 "nvme_io_md": false, 00:19:33.913 "write_zeroes": true, 00:19:33.913 "zcopy": false, 00:19:33.913 "get_zone_info": false, 00:19:33.913 "zone_management": false, 00:19:33.913 "zone_append": false, 00:19:33.913 "compare": false, 00:19:33.913 "compare_and_write": false, 00:19:33.913 "abort": false, 00:19:33.913 "seek_hole": true, 00:19:33.913 "seek_data": 
true, 00:19:33.913 "copy": false, 00:19:33.913 "nvme_iov_md": false 00:19:33.913 }, 00:19:33.913 "driver_specific": { 00:19:33.913 "lvol": { 00:19:33.913 "lvol_store_uuid": "5c24ad78-8741-4065-b239-a5cc577c2b5b", 00:19:33.913 "base_bdev": "nvme0n1", 00:19:33.913 "thin_provision": true, 00:19:33.913 "num_allocated_clusters": 0, 00:19:33.913 "snapshot": false, 00:19:33.913 "clone": false, 00:19:33.913 "esnap_clone": false 00:19:33.913 } 00:19:33.913 } 00:19:33.913 } 00:19:33.913 ]' 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:33.913 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:33.913 14:24:53 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:33.913 14:24:53 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:34.172 14:24:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:34.172 14:24:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:34.172 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:34.172 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:19:34.172 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:19:34.172 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:19:34.172 14:24:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 19a7092a-9595-4a6f-b0c4-64e1331cb197 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:19:34.431 { 00:19:34.431 "name": "19a7092a-9595-4a6f-b0c4-64e1331cb197", 00:19:34.431 "aliases": [ 00:19:34.431 "lvs/nvme0n1p0" 00:19:34.431 ], 00:19:34.431 "product_name": "Logical Volume", 00:19:34.431 "block_size": 4096, 00:19:34.431 "num_blocks": 26476544, 00:19:34.431 "uuid": "19a7092a-9595-4a6f-b0c4-64e1331cb197", 00:19:34.431 "assigned_rate_limits": { 00:19:34.431 "rw_ios_per_sec": 0, 00:19:34.431 "rw_mbytes_per_sec": 0, 00:19:34.431 "r_mbytes_per_sec": 0, 00:19:34.431 "w_mbytes_per_sec": 0 00:19:34.431 }, 00:19:34.431 "claimed": false, 00:19:34.431 "zoned": false, 00:19:34.431 "supported_io_types": { 00:19:34.431 "read": true, 00:19:34.431 "write": true, 00:19:34.431 "unmap": true, 00:19:34.431 "flush": false, 00:19:34.431 "reset": true, 00:19:34.431 "nvme_admin": false, 00:19:34.431 "nvme_io": false, 00:19:34.431 "nvme_io_md": false, 00:19:34.431 "write_zeroes": true, 00:19:34.431 "zcopy": false, 00:19:34.431 "get_zone_info": false, 00:19:34.431 "zone_management": false, 00:19:34.431 "zone_append": false, 00:19:34.431 "compare": false, 00:19:34.431 "compare_and_write": false, 00:19:34.431 "abort": false, 00:19:34.431 "seek_hole": true, 00:19:34.431 "seek_data": true, 00:19:34.431 "copy": false, 00:19:34.431 "nvme_iov_md": false 00:19:34.431 }, 00:19:34.431 "driver_specific": { 00:19:34.431 "lvol": { 00:19:34.431 "lvol_store_uuid": "5c24ad78-8741-4065-b239-a5cc577c2b5b", 00:19:34.431 "base_bdev": 
"nvme0n1", 00:19:34.431 "thin_provision": true, 00:19:34.431 "num_allocated_clusters": 0, 00:19:34.431 "snapshot": false, 00:19:34.431 "clone": false, 00:19:34.431 "esnap_clone": false 00:19:34.431 } 00:19:34.431 } 00:19:34.431 } 00:19:34.431 ]' 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:19:34.431 14:24:54 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 19a7092a-9595-4a6f-b0c4-64e1331cb197 --l2p_dram_limit 10' 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:34.431 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:34.431 14:24:54 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 19a7092a-9595-4a6f-b0c4-64e1331cb197 --l2p_dram_limit 10 -c nvc0n1p0 00:19:34.691 [2024-07-26 14:24:54.370492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.370565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.691 [2024-07-26 14:24:54.370583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:34.691 [2024-07-26 14:24:54.370596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.370670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.370689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.691 [2024-07-26 14:24:54.370701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:34.691 [2024-07-26 14:24:54.370712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.370737] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.691 [2024-07-26 14:24:54.371653] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.691 [2024-07-26 14:24:54.371677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.371693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.691 [2024-07-26 14:24:54.371705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:19:34.691 [2024-07-26 14:24:54.371716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.371885] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3656f234-7c64-48c3-9e1b-ff085368fb1b 00:19:34.691 [2024-07-26 
14:24:54.372881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.372919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:34.691 [2024-07-26 14:24:54.372937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:34.691 [2024-07-26 14:24:54.372947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.377569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.377619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.691 [2024-07-26 14:24:54.377635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.556 ms 00:19:34.691 [2024-07-26 14:24:54.377646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.377750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.377768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.691 [2024-07-26 14:24:54.377781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:34.691 [2024-07-26 14:24:54.377792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.377870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.377886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.691 [2024-07-26 14:24:54.377903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:34.691 [2024-07-26 14:24:54.377927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.377960] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.691 [2024-07-26 14:24:54.381850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.381900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.691 [2024-07-26 14:24:54.381925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.900 ms 00:19:34.691 [2024-07-26 14:24:54.381938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.381978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.691 [2024-07-26 14:24:54.381994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.691 [2024-07-26 14:24:54.382004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:34.691 [2024-07-26 14:24:54.382015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.691 [2024-07-26 14:24:54.382051] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:34.691 [2024-07-26 14:24:54.382239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.691 [2024-07-26 14:24:54.382256] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.691 [2024-07-26 14:24:54.382274] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:34.691 [2024-07-26 14:24:54.382288] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:19:34.691 [2024-07-26 14:24:54.382302] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.691 [2024-07-26 14:24:54.382313] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:34.691 [2024-07-26 14:24:54.382330] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.691 [2024-07-26 14:24:54.382340] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.692 [2024-07-26 14:24:54.382351] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.692 [2024-07-26 14:24:54.382363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.692 [2024-07-26 14:24:54.382375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.692 [2024-07-26 14:24:54.382386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:19:34.692 [2024-07-26 14:24:54.382398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.692 [2024-07-26 14:24:54.382481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.692 [2024-07-26 14:24:54.382497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.692 [2024-07-26 14:24:54.382508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:34.692 [2024-07-26 14:24:54.382522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.692 [2024-07-26 14:24:54.382641] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.692 [2024-07-26 14:24:54.382670] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.692 [2024-07-26 14:24:54.382693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.692 [2024-07-26 14:24:54.382706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.382718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.692 [2024-07-26 14:24:54.382729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.382740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:34.692 [2024-07-26 14:24:54.382751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.692 [2024-07-26 14:24:54.382762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:34.692 [2024-07-26 14:24:54.382775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.692 [2024-07-26 14:24:54.382785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.692 [2024-07-26 14:24:54.382796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:34.692 [2024-07-26 14:24:54.382805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.692 [2024-07-26 14:24:54.382817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.692 [2024-07-26 14:24:54.382827] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:34.692 [2024-07-26 14:24:54.382838] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.382847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.692 [2024-07-26 14:24:54.382861] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:19:34.692 [2024-07-26 14:24:54.382870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.382881] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.692 [2024-07-26 14:24:54.382891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:34.692 [2024-07-26 14:24:54.382916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.692 [2024-07-26 14:24:54.382928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.692 [2024-07-26 14:24:54.383072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.692 [2024-07-26 14:24:54.383095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.692 [2024-07-26 14:24:54.383105] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.692 [2024-07-26 14:24:54.383125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.692 [2024-07-26 14:24:54.383137] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.692 [2024-07-26 14:24:54.383157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.692 [2024-07-26 14:24:54.383167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383180] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.692 [2024-07-26 14:24:54.383189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.692 [2024-07-26 14:24:54.383201] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:34.692 [2024-07-26 14:24:54.383211] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.692 [2024-07-26 14:24:54.383222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.692 [2024-07-26 14:24:54.383232] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:34.692 [2024-07-26 14:24:54.383243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.692 [2024-07-26 14:24:54.383278] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:34.692 [2024-07-26 14:24:54.383287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383297] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.692 [2024-07-26 14:24:54.383308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.692 [2024-07-26 14:24:54.383319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.692 [2024-07-26 14:24:54.383329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.692 [2024-07-26 14:24:54.383341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.692 [2024-07-26 14:24:54.383350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.692 [2024-07-26 14:24:54.383363] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.692 [2024-07-26 14:24:54.383372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.692 [2024-07-26 14:24:54.383382] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.692 [2024-07-26 14:24:54.383391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.692 [2024-07-26 14:24:54.383406] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.692 [2024-07-26 14:24:54.383421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.692 [2024-07-26 14:24:54.383434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:34.692 [2024-07-26 14:24:54.383445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:34.692 [2024-07-26 14:24:54.383456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:34.692 [2024-07-26 14:24:54.383466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:34.692 [2024-07-26 14:24:54.383479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:34.692 [2024-07-26 14:24:54.383491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:34.692 [2024-07-26 14:24:54.383503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:34.692 [2024-07-26 14:24:54.383513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:34.692 [2024-07-26 14:24:54.383525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:34.692 [2024-07-26 14:24:54.383535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:34.692 [2024-07-26 14:24:54.383548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:34.692 [2024-07-26 14:24:54.383559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:34.692 [2024-07-26 14:24:54.383571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:34.692 [2024-07-26 14:24:54.383581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:34.692 [2024-07-26 14:24:54.383593] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.692 [2024-07-26 14:24:54.383604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.692 [2024-07-26 14:24:54.383617] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.693 [2024-07-26 14:24:54.383628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.693 [2024-07-26 14:24:54.383640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.693 [2024-07-26 14:24:54.383651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.693 [2024-07-26 14:24:54.383664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.693 [2024-07-26 14:24:54.383675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.693 [2024-07-26 14:24:54.383687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:19:34.693 [2024-07-26 14:24:54.383698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.693 [2024-07-26 14:24:54.383799] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:34.693 [2024-07-26 14:24:54.383818] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:36.596 [2024-07-26 14:24:56.283503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.283581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:36.596 [2024-07-26 14:24:56.283602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1899.701 ms 00:19:36.596 [2024-07-26 14:24:56.283614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.596 [2024-07-26 14:24:56.312866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.312973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.596 [2024-07-26 14:24:56.313015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.994 ms 00:19:36.596 [2024-07-26 14:24:56.313026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.596 [2024-07-26 14:24:56.313240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.313260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.596 [2024-07-26 14:24:56.313280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:36.596 [2024-07-26 14:24:56.313307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.596 [2024-07-26 14:24:56.347086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.347137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.596 [2024-07-26 14:24:56.347174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.685 ms 00:19:36.596 [2024-07-26 14:24:56.347185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.596 [2024-07-26 14:24:56.347241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.347257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.596 [2024-07-26 14:24:56.347276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:19:36.596 [2024-07-26 14:24:56.347288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.596 [2024-07-26 14:24:56.347694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.347719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.596 [2024-07-26 14:24:56.347736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:19:36.596 [2024-07-26 14:24:56.347775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.596 [2024-07-26 14:24:56.347979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.596 [2024-07-26 14:24:56.348022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.596 [2024-07-26 14:24:56.348054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:36.596 [2024-07-26 14:24:56.348076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.855 [2024-07-26 14:24:56.365284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.855 [2024-07-26 14:24:56.365327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.856 [2024-07-26 14:24:56.365362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.162 ms 00:19:36.856 [2024-07-26 14:24:56.365373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.377626] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:36.856 [2024-07-26 14:24:56.380384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 14:24:56.380422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.856 [2024-07-26 14:24:56.380454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.917 ms 00:19:36.856 [2024-07-26 14:24:56.380466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.445154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 14:24:56.445225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:36.856 [2024-07-26 14:24:56.445246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.651 ms 00:19:36.856 [2024-07-26 14:24:56.445260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.445481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 14:24:56.445502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.856 [2024-07-26 14:24:56.445515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:19:36.856 [2024-07-26 14:24:56.445530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.477971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 14:24:56.478072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:36.856 [2024-07-26 14:24:56.478094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.373 ms 00:19:36.856 [2024-07-26 14:24:56.478113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.508969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 
14:24:56.509067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:36.856 [2024-07-26 14:24:56.509088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.804 ms 00:19:36.856 [2024-07-26 14:24:56.509102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.509950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 14:24:56.510191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.856 [2024-07-26 14:24:56.510227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:19:36.856 [2024-07-26 14:24:56.510244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.856 [2024-07-26 14:24:56.591192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.856 [2024-07-26 14:24:56.591280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:36.856 [2024-07-26 14:24:56.591301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.872 ms 00:19:36.856 [2024-07-26 14:24:56.591317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.115 [2024-07-26 14:24:56.620814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.115 [2024-07-26 14:24:56.620879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:37.115 [2024-07-26 14:24:56.620898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.446 ms 00:19:37.115 [2024-07-26 14:24:56.620950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.115 [2024-07-26 14:24:56.649163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.115 [2024-07-26 14:24:56.649224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:37.115 [2024-07-26 14:24:56.649241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.145 ms 00:19:37.115 [2024-07-26 14:24:56.649254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.115 [2024-07-26 14:24:56.677618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.115 [2024-07-26 14:24:56.677675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.115 [2024-07-26 14:24:56.677692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.318 ms 00:19:37.115 [2024-07-26 14:24:56.677704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.115 [2024-07-26 14:24:56.677753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.115 [2024-07-26 14:24:56.677773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.115 [2024-07-26 14:24:56.677786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:37.115 [2024-07-26 14:24:56.677800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.115 [2024-07-26 14:24:56.677937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.115 [2024-07-26 14:24:56.677963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.115 [2024-07-26 14:24:56.677976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:37.115 [2024-07-26 14:24:56.678004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.115 [2024-07-26 14:24:56.679390] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2308.291 ms, result 0 00:19:37.115 { 00:19:37.115 "name": "ftl0", 00:19:37.115 "uuid": "3656f234-7c64-48c3-9e1b-ff085368fb1b" 00:19:37.115 } 00:19:37.115 14:24:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:37.115 14:24:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:37.374 14:24:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:37.374 14:24:56 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:37.633 [2024-07-26 14:24:57.214592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.633 [2024-07-26 14:24:57.214644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.633 [2024-07-26 14:24:57.214683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:37.633 [2024-07-26 14:24:57.214695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.214731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.634 [2024-07-26 14:24:57.217851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.217902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.634 [2024-07-26 14:24:57.217945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.098 ms 00:19:37.634 [2024-07-26 14:24:57.217960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.218237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.218261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.634 [2024-07-26 14:24:57.218284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:19:37.634 [2024-07-26 14:24:57.218312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.221295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.221344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.634 [2024-07-26 14:24:57.221359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:19:37.634 [2024-07-26 14:24:57.221371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.227011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.227047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.634 [2024-07-26 14:24:57.227076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.579 ms 00:19:37.634 [2024-07-26 14:24:57.227088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.252360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.252400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.634 [2024-07-26 14:24:57.252431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.211 ms 00:19:37.634 [2024-07-26 14:24:57.252443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 
14:24:57.269713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.269781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.634 [2024-07-26 14:24:57.269799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.226 ms 00:19:37.634 [2024-07-26 14:24:57.269812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.270120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.270148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.634 [2024-07-26 14:24:57.270164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:19:37.634 [2024-07-26 14:24:57.270178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.299352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.299441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:37.634 [2024-07-26 14:24:57.299457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.144 ms 00:19:37.634 [2024-07-26 14:24:57.299469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.326043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.326116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:37.634 [2024-07-26 14:24:57.326133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.517 ms 00:19:37.634 [2024-07-26 14:24:57.326145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.351064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.351121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.634 [2024-07-26 14:24:57.351137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.877 ms 00:19:37.634 [2024-07-26 14:24:57.351150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.375936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.634 [2024-07-26 14:24:57.376025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.634 [2024-07-26 14:24:57.376043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.699 ms 00:19:37.634 [2024-07-26 14:24:57.376056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.634 [2024-07-26 14:24:57.376122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.634 [2024-07-26 14:24:57.376149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 
14:24:57.376212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:37.634 [2024-07-26 14:24:57.376496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:19:37.634 [2024-07-26 14:24:57.376508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.376998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.635 [2024-07-26 14:24:57.377480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.636 [2024-07-26 14:24:57.377501] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.636 [2024-07-26 14:24:57.377513] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3656f234-7c64-48c3-9e1b-ff085368fb1b 00:19:37.636 [2024-07-26 14:24:57.377526] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.636 [2024-07-26 14:24:57.377537] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.636 [2024-07-26 14:24:57.377551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.636 [2024-07-26 14:24:57.377562] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.636 [2024-07-26 14:24:57.377574] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.636 [2024-07-26 14:24:57.377584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.636 [2024-07-26 14:24:57.377596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.636 [2024-07-26 14:24:57.377606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.636 [2024-07-26 14:24:57.377619] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.636 [2024-07-26 14:24:57.377629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-07-26 14:24:57.377642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.636 [2024-07-26 14:24:57.377653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.510 ms 00:19:37.636 [2024-07-26 14:24:57.377668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-07-26 14:24:57.391453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-07-26 14:24:57.391506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.636 [2024-07-26 14:24:57.391521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.736 ms 00:19:37.636 [2024-07-26 14:24:57.391533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-07-26 14:24:57.392041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-07-26 14:24:57.392096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.636 [2024-07-26 14:24:57.392116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:19:37.636 [2024-07-26 14:24:57.392145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.435616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.435702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.895 [2024-07-26 14:24:57.435720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.435732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.435834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.435854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.895 [2024-07-26 14:24:57.435870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.435882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.436035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.436060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.895 [2024-07-26 14:24:57.436087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.436110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.436134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.436152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:19:37.895 [2024-07-26 14:24:57.436164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.436178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.518931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.519002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.895 [2024-07-26 14:24:57.519020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.519032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.589432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.589521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.895 [2024-07-26 14:24:57.589543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.589556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.589668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.589690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.895 [2024-07-26 14:24:57.589703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.589715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.589792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.589815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.895 [2024-07-26 14:24:57.589828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.589839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.590014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.590039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.895 [2024-07-26 14:24:57.590052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.590064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.590116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.590137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.895 [2024-07-26 14:24:57.590151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.590162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.590210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.590228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.895 [2024-07-26 14:24:57.590240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.590252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.590305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.895 [2024-07-26 14:24:57.590341] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.895 [2024-07-26 14:24:57.590368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.895 [2024-07-26 14:24:57.590379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.895 [2024-07-26 14:24:57.590522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.892 ms, result 0 00:19:37.895 true 00:19:37.895 14:24:57 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79597 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 79597 ']' 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 79597 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79597 00:19:37.895 killing process with pid 79597 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79597' 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 79597 00:19:37.895 14:24:57 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 79597 00:19:43.187 14:25:02 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:47.374 262144+0 records in 00:19:47.374 262144+0 records out 00:19:47.374 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.15993 s, 258 MB/s 00:19:47.374 14:25:06 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:48.750 14:25:08 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:48.750 [2024-07-26 14:25:08.444450] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
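The spdk_dd process whose startup banner appears above is the write phase of the restore test. The restore.sh steps traced in this log reduce to the following sequence; this is a minimal sketch assembled only from the commands visible here, and it assumes the wrapped save_subsystem_config output is what restore.sh places in test/ftl/config/ftl.json (the trace shows the echo/rpc.py pipeline and the later --json argument, but not the redirect itself):

# Capture the current bdev subsystem config so spdk_dd can recreate ftl0 on its own.
SPDK=/home/vagrant/spdk_repo/spdk
{
  echo '{"subsystems": ['
  "$SPDK/scripts/rpc.py" save_subsystem_config -n bdev
  echo ']}'
} > "$SPDK/test/ftl/config/ftl.json"            # assumed target; see note above
"$SPDK/scripts/rpc.py" bdev_ftl_unload -b ftl0   # clean unload persists FTL metadata (superblock, NV cache, clean state)

# Generate 1 GiB of random data and record its checksum as the reference value
# for verification once the FTL device is brought back up.
dd if=/dev/urandom of="$SPDK/test/ftl/testfile" bs=4K count=256K
md5sum "$SPDK/test/ftl/testfile"

# Write the file onto the ftl0 bdev from a standalone spdk_dd app driven by the
# saved JSON config (this is the process whose startup banner appears above).
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
  --json="$SPDK/test/ftl/config/ftl.json"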
00:19:48.750 [2024-07-26 14:25:08.444587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79823 ] 00:19:49.008 [2024-07-26 14:25:08.606256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.267 [2024-07-26 14:25:08.815212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.537 [2024-07-26 14:25:09.082688] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.538 [2024-07-26 14:25:09.082774] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.538 [2024-07-26 14:25:09.240021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.240077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.538 [2024-07-26 14:25:09.240097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.538 [2024-07-26 14:25:09.240108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.240166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.240183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.538 [2024-07-26 14:25:09.240195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:49.538 [2024-07-26 14:25:09.240208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.240269] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.538 [2024-07-26 14:25:09.241080] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.538 [2024-07-26 14:25:09.241108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.241120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.538 [2024-07-26 14:25:09.241133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.863 ms 00:19:49.538 [2024-07-26 14:25:09.241143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.242326] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.538 [2024-07-26 14:25:09.256192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.256247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.538 [2024-07-26 14:25:09.256279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.867 ms 00:19:49.538 [2024-07-26 14:25:09.256289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.256352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.256372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.538 [2024-07-26 14:25:09.256383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:49.538 [2024-07-26 14:25:09.256392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.261056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:49.538 [2024-07-26 14:25:09.261093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.538 [2024-07-26 14:25:09.261108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.583 ms 00:19:49.538 [2024-07-26 14:25:09.261119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.261208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.261226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.538 [2024-07-26 14:25:09.261237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:49.538 [2024-07-26 14:25:09.261247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.261315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.261331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.538 [2024-07-26 14:25:09.261342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:49.538 [2024-07-26 14:25:09.261352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.261382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:49.538 [2024-07-26 14:25:09.265872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.265979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.538 [2024-07-26 14:25:09.266006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.497 ms 00:19:49.538 [2024-07-26 14:25:09.266025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.266091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.266117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.538 [2024-07-26 14:25:09.266137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:49.538 [2024-07-26 14:25:09.266156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.266226] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:49.538 [2024-07-26 14:25:09.266270] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:49.538 [2024-07-26 14:25:09.266351] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.538 [2024-07-26 14:25:09.266403] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:49.538 [2024-07-26 14:25:09.266528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.538 [2024-07-26 14:25:09.266555] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.538 [2024-07-26 14:25:09.266570] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:49.538 [2024-07-26 14:25:09.266585] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.538 [2024-07-26 14:25:09.266598] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.538 [2024-07-26 14:25:09.266610] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:49.538 [2024-07-26 14:25:09.266621] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.538 [2024-07-26 14:25:09.266632] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.538 [2024-07-26 14:25:09.266642] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.538 [2024-07-26 14:25:09.266654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.266671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.538 [2024-07-26 14:25:09.266683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:19:49.538 [2024-07-26 14:25:09.266694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.266792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.538 [2024-07-26 14:25:09.266808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.538 [2024-07-26 14:25:09.266820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:49.538 [2024-07-26 14:25:09.266830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.538 [2024-07-26 14:25:09.266991] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.538 [2024-07-26 14:25:09.267011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.538 [2024-07-26 14:25:09.267030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267042] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267054] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.538 [2024-07-26 14:25:09.267066] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.538 [2024-07-26 14:25:09.267099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267109] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.538 [2024-07-26 14:25:09.267120] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.538 [2024-07-26 14:25:09.267131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:49.538 [2024-07-26 14:25:09.267141] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.538 [2024-07-26 14:25:09.267152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.538 [2024-07-26 14:25:09.267178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:49.538 [2024-07-26 14:25:09.267188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.538 [2024-07-26 14:25:09.267223] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267233] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.538 [2024-07-26 14:25:09.267313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267340] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.538 [2024-07-26 14:25:09.267362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.538 [2024-07-26 14:25:09.267396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.538 [2024-07-26 14:25:09.267428] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.538 [2024-07-26 14:25:09.267449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.538 [2024-07-26 14:25:09.267460] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:49.538 [2024-07-26 14:25:09.267470] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.538 [2024-07-26 14:25:09.267481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.538 [2024-07-26 14:25:09.267491] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:49.538 [2024-07-26 14:25:09.267502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.538 [2024-07-26 14:25:09.267512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.539 [2024-07-26 14:25:09.267523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:49.539 [2024-07-26 14:25:09.267533] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.539 [2024-07-26 14:25:09.267543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.539 [2024-07-26 14:25:09.267554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:49.539 [2024-07-26 14:25:09.267565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.539 [2024-07-26 14:25:09.267575] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.539 [2024-07-26 14:25:09.267586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.539 [2024-07-26 14:25:09.267597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.539 [2024-07-26 14:25:09.267608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.539 [2024-07-26 14:25:09.267620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.539 [2024-07-26 14:25:09.267631] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.539 [2024-07-26 14:25:09.267642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.539 
[2024-07-26 14:25:09.267653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.539 [2024-07-26 14:25:09.267663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.539 [2024-07-26 14:25:09.267674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.539 [2024-07-26 14:25:09.267686] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.539 [2024-07-26 14:25:09.267700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:49.539 [2024-07-26 14:25:09.267724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:49.539 [2024-07-26 14:25:09.267736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:49.539 [2024-07-26 14:25:09.267747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:49.539 [2024-07-26 14:25:09.267758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:49.539 [2024-07-26 14:25:09.267770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:49.539 [2024-07-26 14:25:09.267782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:49.539 [2024-07-26 14:25:09.267793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:49.539 [2024-07-26 14:25:09.267816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:49.539 [2024-07-26 14:25:09.267829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:49.539 [2024-07-26 14:25:09.267886] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.539 [2024-07-26 14:25:09.267911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.539 [2024-07-26 14:25:09.267941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.539 [2024-07-26 14:25:09.267953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.539 [2024-07-26 14:25:09.267964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.539 [2024-07-26 14:25:09.267976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.539 [2024-07-26 14:25:09.267988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.539 [2024-07-26 14:25:09.268000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:19:49.539 [2024-07-26 14:25:09.268011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.320264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.320364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.818 [2024-07-26 14:25:09.320399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.190 ms 00:19:49.818 [2024-07-26 14:25:09.320409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.320515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.320530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.818 [2024-07-26 14:25:09.320541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:49.818 [2024-07-26 14:25:09.320551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.357252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.357334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.818 [2024-07-26 14:25:09.357367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.613 ms 00:19:49.818 [2024-07-26 14:25:09.357377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.357442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.357457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.818 [2024-07-26 14:25:09.357469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:49.818 [2024-07-26 14:25:09.357484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.357840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.357857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.818 [2024-07-26 14:25:09.357868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:49.818 [2024-07-26 14:25:09.357877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.358110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.358132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.818 [2024-07-26 14:25:09.358145] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:19:49.818 [2024-07-26 14:25:09.358156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.373812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.373849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.818 [2024-07-26 14:25:09.373881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.623 ms 00:19:49.818 [2024-07-26 14:25:09.373895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.389526] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:49.818 [2024-07-26 14:25:09.389568] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.818 [2024-07-26 14:25:09.389601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.389612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.818 [2024-07-26 14:25:09.389623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.503 ms 00:19:49.818 [2024-07-26 14:25:09.389632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.418237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.418322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:49.818 [2024-07-26 14:25:09.418358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.563 ms 00:19:49.818 [2024-07-26 14:25:09.418368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.432684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.432719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:49.818 [2024-07-26 14:25:09.432750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.272 ms 00:19:49.818 [2024-07-26 14:25:09.432761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.446193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.446229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:49.818 [2024-07-26 14:25:09.446259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:19:49.818 [2024-07-26 14:25:09.446269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.446972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.447008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.818 [2024-07-26 14:25:09.447037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:19:49.818 [2024-07-26 14:25:09.447047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.508802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.508868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:49.818 [2024-07-26 14:25:09.508903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 61.731 ms 00:19:49.818 [2024-07-26 14:25:09.508946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.519538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:49.818 [2024-07-26 14:25:09.521789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.521821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.818 [2024-07-26 14:25:09.521869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.744 ms 00:19:49.818 [2024-07-26 14:25:09.521880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.522021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.522042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:49.818 [2024-07-26 14:25:09.522055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:49.818 [2024-07-26 14:25:09.522066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.522166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.522187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.818 [2024-07-26 14:25:09.522200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:49.818 [2024-07-26 14:25:09.522210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.522239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.522253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.818 [2024-07-26 14:25:09.522264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:49.818 [2024-07-26 14:25:09.522274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.522326] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:49.818 [2024-07-26 14:25:09.522357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.522382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:49.818 [2024-07-26 14:25:09.522412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:49.818 [2024-07-26 14:25:09.522422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.552837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.552876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.818 [2024-07-26 14:25:09.552925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.392 ms 00:19:49.818 [2024-07-26 14:25:09.552967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.818 [2024-07-26 14:25:09.553070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.818 [2024-07-26 14:25:09.553094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.818 [2024-07-26 14:25:09.553109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:49.818 [2024-07-26 14:25:09.553120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:49.818 [2024-07-26 14:25:09.554569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.988 ms, result 0 00:20:33.231  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (24 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (23 MBps) Copying: 165/1024 [MB] (23 MBps) Copying: 188/1024 [MB] (23 MBps) Copying: 211/1024 [MB] (23 MBps) Copying: 235/1024 [MB] (23 MBps) Copying: 258/1024 [MB] (23 MBps) Copying: 283/1024 [MB] (24 MBps) Copying: 307/1024 [MB] (24 MBps) Copying: 330/1024 [MB] (23 MBps) Copying: 354/1024 [MB] (23 MBps) Copying: 377/1024 [MB] (23 MBps) Copying: 401/1024 [MB] (23 MBps) Copying: 425/1024 [MB] (24 MBps) Copying: 449/1024 [MB] (23 MBps) Copying: 473/1024 [MB] (23 MBps) Copying: 497/1024 [MB] (24 MBps) Copying: 521/1024 [MB] (23 MBps) Copying: 544/1024 [MB] (23 MBps) Copying: 568/1024 [MB] (23 MBps) Copying: 591/1024 [MB] (23 MBps) Copying: 614/1024 [MB] (23 MBps) Copying: 638/1024 [MB] (23 MBps) Copying: 662/1024 [MB] (24 MBps) Copying: 686/1024 [MB] (23 MBps) Copying: 709/1024 [MB] (23 MBps) Copying: 733/1024 [MB] (23 MBps) Copying: 757/1024 [MB] (23 MBps) Copying: 781/1024 [MB] (23 MBps) Copying: 804/1024 [MB] (23 MBps) Copying: 827/1024 [MB] (23 MBps) Copying: 851/1024 [MB] (23 MBps) Copying: 874/1024 [MB] (23 MBps) Copying: 897/1024 [MB] (22 MBps) Copying: 921/1024 [MB] (23 MBps) Copying: 945/1024 [MB] (24 MBps) Copying: 969/1024 [MB] (24 MBps) Copying: 993/1024 [MB] (23 MBps) Copying: 1016/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:25:52.890170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.231 [2024-07-26 14:25:52.890232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.231 [2024-07-26 14:25:52.890252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:33.231 [2024-07-26 14:25:52.890264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.231 [2024-07-26 14:25:52.890293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.231 [2024-07-26 14:25:52.893369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.231 [2024-07-26 14:25:52.893402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.231 [2024-07-26 14:25:52.893433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:20:33.231 [2024-07-26 14:25:52.893443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.231 [2024-07-26 14:25:52.895225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.231 [2024-07-26 14:25:52.895266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.232 [2024-07-26 14:25:52.895282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:20:33.232 [2024-07-26 14:25:52.895292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.232 [2024-07-26 14:25:52.910409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.232 [2024-07-26 14:25:52.910448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:33.232 [2024-07-26 14:25:52.910480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.096 ms 00:20:33.232 [2024-07-26 14:25:52.910490] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:33.232 [2024-07-26 14:25:52.916542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.232 [2024-07-26 14:25:52.916579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:33.232 [2024-07-26 14:25:52.916608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.015 ms 00:20:33.232 [2024-07-26 14:25:52.916618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.232 [2024-07-26 14:25:52.943488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.232 [2024-07-26 14:25:52.943527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.232 [2024-07-26 14:25:52.943559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.814 ms 00:20:33.232 [2024-07-26 14:25:52.943569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.232 [2024-07-26 14:25:52.959470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.232 [2024-07-26 14:25:52.959507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.232 [2024-07-26 14:25:52.959539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.861 ms 00:20:33.232 [2024-07-26 14:25:52.959549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.232 [2024-07-26 14:25:52.959686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.232 [2024-07-26 14:25:52.959706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.232 [2024-07-26 14:25:52.959723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:33.232 [2024-07-26 14:25:52.959736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.232 [2024-07-26 14:25:52.986770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.232 [2024-07-26 14:25:52.986823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:33.232 [2024-07-26 14:25:52.986855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.015 ms 00:20:33.232 [2024-07-26 14:25:52.986865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.492 [2024-07-26 14:25:53.015203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.492 [2024-07-26 14:25:53.015239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:33.492 [2024-07-26 14:25:53.015270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.253 ms 00:20:33.492 [2024-07-26 14:25:53.015279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.492 [2024-07-26 14:25:53.041611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.492 [2024-07-26 14:25:53.041648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.492 [2024-07-26 14:25:53.041678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.294 ms 00:20:33.492 [2024-07-26 14:25:53.041703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.492 [2024-07-26 14:25:53.069286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.492 [2024-07-26 14:25:53.069322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.492 [2024-07-26 14:25:53.069353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.505 ms 
00:20:33.492 [2024-07-26 14:25:53.069363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.492 [2024-07-26 14:25:53.069401] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.492 [2024-07-26 14:25:53.069422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:33.492 [2024-07-26 14:25:53.069678] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.069972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 
[2024-07-26 14:25:53.070022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:20:33.493 [2024-07-26 14:25:53.070307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:33.493 [2024-07-26 14:25:53.070606] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:33.493 [2024-07-26 14:25:53.070616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3656f234-7c64-48c3-9e1b-ff085368fb1b 00:20:33.493 [2024-07-26 14:25:53.070626] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:33.493 [2024-07-26 14:25:53.070641] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:33.493 [2024-07-26 14:25:53.070651] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:33.493 [2024-07-26 14:25:53.070660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:33.493 [2024-07-26 14:25:53.070669] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:33.493 [2024-07-26 14:25:53.070679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:33.493 [2024-07-26 14:25:53.070688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:33.493 [2024-07-26 14:25:53.070697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:33.493 [2024-07-26 14:25:53.070705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:33.494 [2024-07-26 14:25:53.070715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.494 [2024-07-26 14:25:53.070725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.494 [2024-07-26 14:25:53.070735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:20:33.494 [2024-07-26 14:25:53.070749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.085269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.494 [2024-07-26 14:25:53.085318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.494 [2024-07-26 14:25:53.085348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.482 ms 00:20:33.494 [2024-07-26 14:25:53.085369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.085834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.494 [2024-07-26 14:25:53.085855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.494 [2024-07-26 14:25:53.085869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:20:33.494 [2024-07-26 14:25:53.085880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.119187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.494 [2024-07-26 14:25:53.119236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.494 [2024-07-26 14:25:53.119266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.494 [2024-07-26 14:25:53.119276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.119339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.494 [2024-07-26 14:25:53.119353] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.494 [2024-07-26 14:25:53.119363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.494 [2024-07-26 14:25:53.119373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.119448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.494 [2024-07-26 14:25:53.119465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.494 [2024-07-26 14:25:53.119476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.494 [2024-07-26 14:25:53.119486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.119505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.494 [2024-07-26 14:25:53.119517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.494 [2024-07-26 14:25:53.119527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.494 [2024-07-26 14:25:53.119536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.494 [2024-07-26 14:25:53.202843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.494 [2024-07-26 14:25:53.202941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.494 [2024-07-26 14:25:53.202960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.494 [2024-07-26 14:25:53.202972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.752 [2024-07-26 14:25:53.284234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.752 [2024-07-26 14:25:53.284271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.752 [2024-07-26 14:25:53.284282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.752 [2024-07-26 14:25:53.284414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.752 [2024-07-26 14:25:53.284425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.752 [2024-07-26 14:25:53.284435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.752 [2024-07-26 14:25:53.284490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.752 [2024-07-26 14:25:53.284501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.752 [2024-07-26 14:25:53.284510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.752 [2024-07-26 14:25:53.284654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.752 [2024-07-26 14:25:53.284672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.752 [2024-07-26 14:25:53.284682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:20:33.752 [2024-07-26 14:25:53.284749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.752 [2024-07-26 14:25:53.284760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.752 [2024-07-26 14:25:53.284770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.752 [2024-07-26 14:25:53.284825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.752 [2024-07-26 14:25:53.284841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.752 [2024-07-26 14:25:53.284851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.752 [2024-07-26 14:25:53.284897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.753 [2024-07-26 14:25:53.284928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.753 [2024-07-26 14:25:53.284939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.753 [2024-07-26 14:25:53.284949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.753 [2024-07-26 14:25:53.285152] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.946 ms, result 0 00:20:34.687 00:20:34.687 00:20:34.687 14:25:54 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:34.946 [2024-07-26 14:25:54.498107] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:34.946 [2024-07-26 14:25:54.498281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80281 ] 00:20:34.946 [2024-07-26 14:25:54.668717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.205 [2024-07-26 14:25:54.833764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.464 [2024-07-26 14:25:55.110877] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.464 [2024-07-26 14:25:55.111028] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:35.724 [2024-07-26 14:25:55.267873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.267957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:35.724 [2024-07-26 14:25:55.267978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.724 [2024-07-26 14:25:55.267990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.268056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.268075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:35.724 [2024-07-26 14:25:55.268087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:35.724 [2024-07-26 14:25:55.268103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.268144] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:35.724 [2024-07-26 14:25:55.269049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:35.724 [2024-07-26 14:25:55.269082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.269096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:35.724 [2024-07-26 14:25:55.269107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:20:35.724 [2024-07-26 14:25:55.269117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.270390] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:35.724 [2024-07-26 14:25:55.286228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.286286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:35.724 [2024-07-26 14:25:55.286303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.840 ms 00:20:35.724 [2024-07-26 14:25:55.286313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.286409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.286430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:35.724 [2024-07-26 14:25:55.286441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:35.724 [2024-07-26 14:25:55.286451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.291200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:35.724 [2024-07-26 14:25:55.291241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:35.724 [2024-07-26 14:25:55.291255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:20:35.724 [2024-07-26 14:25:55.291265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.291356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.291374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:35.724 [2024-07-26 14:25:55.291384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:35.724 [2024-07-26 14:25:55.291394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.291450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.291466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:35.724 [2024-07-26 14:25:55.291477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:35.724 [2024-07-26 14:25:55.291486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.291515] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:35.724 [2024-07-26 14:25:55.295870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.296066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:35.724 [2024-07-26 14:25:55.296198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.362 ms 00:20:35.724 [2024-07-26 14:25:55.296263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.296429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.724 [2024-07-26 14:25:55.296483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:35.724 [2024-07-26 14:25:55.296522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:35.724 [2024-07-26 14:25:55.296673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.724 [2024-07-26 14:25:55.296777] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:35.725 [2024-07-26 14:25:55.296950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:35.725 [2024-07-26 14:25:55.297053] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:35.725 [2024-07-26 14:25:55.297164] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:35.725 [2024-07-26 14:25:55.297380] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:35.725 [2024-07-26 14:25:55.297548] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:35.725 [2024-07-26 14:25:55.297618] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:35.725 [2024-07-26 14:25:55.297776] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:35.725 [2024-07-26 14:25:55.297838] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:35.725 [2024-07-26 14:25:55.297891] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:35.725 [2024-07-26 14:25:55.298035] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:35.725 [2024-07-26 14:25:55.298084] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:35.725 [2024-07-26 14:25:55.298119] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:35.725 [2024-07-26 14:25:55.298156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-07-26 14:25:55.298246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:35.725 [2024-07-26 14:25:55.298283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:20:35.725 [2024-07-26 14:25:55.298333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-07-26 14:25:55.298513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.725 [2024-07-26 14:25:55.298625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:35.725 [2024-07-26 14:25:55.298742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:35.725 [2024-07-26 14:25:55.298793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.725 [2024-07-26 14:25:55.298944] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:35.725 [2024-07-26 14:25:55.298971] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:35.725 [2024-07-26 14:25:55.298992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:35.725 [2024-07-26 14:25:55.299023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:35.725 [2024-07-26 14:25:55.299053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.725 [2024-07-26 14:25:55.299072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:35.725 [2024-07-26 14:25:55.299082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:35.725 [2024-07-26 14:25:55.299091] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:35.725 [2024-07-26 14:25:55.299100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:35.725 [2024-07-26 14:25:55.299110] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:35.725 [2024-07-26 14:25:55.299119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:35.725 [2024-07-26 14:25:55.299139] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299148] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:35.725 [2024-07-26 14:25:55.299193] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:35.725 [2024-07-26 14:25:55.299221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:35.725 [2024-07-26 14:25:55.299249] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299258] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:35.725 [2024-07-26 14:25:55.299276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:35.725 [2024-07-26 14:25:55.299318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.725 [2024-07-26 14:25:55.299336] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:35.725 [2024-07-26 14:25:55.299345] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:35.725 [2024-07-26 14:25:55.299354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:35.725 [2024-07-26 14:25:55.299363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:35.725 [2024-07-26 14:25:55.299372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:35.725 [2024-07-26 14:25:55.299381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:35.725 [2024-07-26 14:25:55.299399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:35.725 [2024-07-26 14:25:55.299408] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299418] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:35.725 [2024-07-26 14:25:55.299428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:35.725 [2024-07-26 14:25:55.299438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:35.725 [2024-07-26 14:25:55.299457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:35.725 [2024-07-26 14:25:55.299466] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:35.725 [2024-07-26 14:25:55.299475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:35.725 
[2024-07-26 14:25:55.299484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:35.725 [2024-07-26 14:25:55.299493] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:35.725 [2024-07-26 14:25:55.299502] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:35.725 [2024-07-26 14:25:55.299513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:35.725 [2024-07-26 14:25:55.299526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.725 [2024-07-26 14:25:55.299537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:35.725 [2024-07-26 14:25:55.299547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:35.725 [2024-07-26 14:25:55.299573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:35.725 [2024-07-26 14:25:55.299584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:35.725 [2024-07-26 14:25:55.299594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:35.725 [2024-07-26 14:25:55.299604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:35.725 [2024-07-26 14:25:55.299614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:35.725 [2024-07-26 14:25:55.299624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:35.725 [2024-07-26 14:25:55.299634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:35.725 [2024-07-26 14:25:55.299644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:35.725 [2024-07-26 14:25:55.299654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:35.726 [2024-07-26 14:25:55.299665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:35.726 [2024-07-26 14:25:55.299675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:35.726 [2024-07-26 14:25:55.299686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:35.726 [2024-07-26 14:25:55.299695] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:35.726 [2024-07-26 14:25:55.299706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:35.726 [2024-07-26 14:25:55.299722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:35.726 [2024-07-26 14:25:55.299732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:35.726 [2024-07-26 14:25:55.299743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:35.726 [2024-07-26 14:25:55.299753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:35.726 [2024-07-26 14:25:55.299765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.299776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:35.726 [2024-07-26 14:25:55.299787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:20:35.726 [2024-07-26 14:25:55.299797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.337746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.337805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:35.726 [2024-07-26 14:25:55.337840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.891 ms 00:20:35.726 [2024-07-26 14:25:55.337852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.337992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.338011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:35.726 [2024-07-26 14:25:55.338022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:35.726 [2024-07-26 14:25:55.338031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.370554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.370607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:35.726 [2024-07-26 14:25:55.370640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.412 ms 00:20:35.726 [2024-07-26 14:25:55.370651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.370707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.370722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:35.726 [2024-07-26 14:25:55.370732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:35.726 [2024-07-26 14:25:55.370746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.371169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.371187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:35.726 [2024-07-26 14:25:55.371199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:20:35.726 [2024-07-26 14:25:55.371209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.371413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.371438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:35.726 [2024-07-26 14:25:55.371451] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:20:35.726 [2024-07-26 14:25:55.371461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.385614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.385652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:35.726 [2024-07-26 14:25:55.385683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.123 ms 00:20:35.726 [2024-07-26 14:25:55.385697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.399819] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:35.726 [2024-07-26 14:25:55.399859] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:35.726 [2024-07-26 14:25:55.399946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.399961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:35.726 [2024-07-26 14:25:55.399973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.137 ms 00:20:35.726 [2024-07-26 14:25:55.399984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.425287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.425330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:35.726 [2024-07-26 14:25:55.425362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.253 ms 00:20:35.726 [2024-07-26 14:25:55.425371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.439649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.439699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:35.726 [2024-07-26 14:25:55.439712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.235 ms 00:20:35.726 [2024-07-26 14:25:55.439721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.453107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.453156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:35.726 [2024-07-26 14:25:55.453170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.346 ms 00:20:35.726 [2024-07-26 14:25:55.453179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.726 [2024-07-26 14:25:55.453979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.726 [2024-07-26 14:25:55.454023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:35.726 [2024-07-26 14:25:55.454036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:20:35.726 [2024-07-26 14:25:55.454045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-07-26 14:25:55.517278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-07-26 14:25:55.517354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:35.985 [2024-07-26 14:25:55.517371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.206 ms 00:20:35.985 [2024-07-26 14:25:55.517388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-07-26 14:25:55.528424] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:35.985 [2024-07-26 14:25:55.530704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-07-26 14:25:55.530746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:35.985 [2024-07-26 14:25:55.530761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.245 ms 00:20:35.985 [2024-07-26 14:25:55.530770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-07-26 14:25:55.530886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.985 [2024-07-26 14:25:55.530905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:35.985 [2024-07-26 14:25:55.530916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:35.985 [2024-07-26 14:25:55.530956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.985 [2024-07-26 14:25:55.531064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-07-26 14:25:55.531081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:35.986 [2024-07-26 14:25:55.531092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:35.986 [2024-07-26 14:25:55.531102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-07-26 14:25:55.531130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-07-26 14:25:55.531158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:35.986 [2024-07-26 14:25:55.531169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:35.986 [2024-07-26 14:25:55.531178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-07-26 14:25:55.531209] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:35.986 [2024-07-26 14:25:55.531224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-07-26 14:25:55.531237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:35.986 [2024-07-26 14:25:55.531247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:35.986 [2024-07-26 14:25:55.531257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-07-26 14:25:55.560512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-07-26 14:25:55.560565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:35.986 [2024-07-26 14:25:55.560597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.231 ms 00:20:35.986 [2024-07-26 14:25:55.560616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:35.986 [2024-07-26 14:25:55.560713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:35.986 [2024-07-26 14:25:55.560733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:35.986 [2024-07-26 14:25:55.560745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:35.986 [2024-07-26 14:25:55.560756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:35.986 [2024-07-26 14:25:55.562075] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.604 ms, result 0 00:21:19.632  Copying: 22/1024 [MB] (22 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 93/1024 [MB] (23 MBps) Copying: 116/1024 [MB] (23 MBps) Copying: 140/1024 [MB] (23 MBps) Copying: 163/1024 [MB] (23 MBps) Copying: 187/1024 [MB] (23 MBps) Copying: 210/1024 [MB] (23 MBps) Copying: 232/1024 [MB] (22 MBps) Copying: 255/1024 [MB] (22 MBps) Copying: 278/1024 [MB] (23 MBps) Copying: 302/1024 [MB] (23 MBps) Copying: 326/1024 [MB] (23 MBps) Copying: 350/1024 [MB] (23 MBps) Copying: 374/1024 [MB] (24 MBps) Copying: 399/1024 [MB] (24 MBps) Copying: 423/1024 [MB] (24 MBps) Copying: 447/1024 [MB] (23 MBps) Copying: 471/1024 [MB] (23 MBps) Copying: 495/1024 [MB] (23 MBps) Copying: 518/1024 [MB] (23 MBps) Copying: 542/1024 [MB] (23 MBps) Copying: 565/1024 [MB] (22 MBps) Copying: 588/1024 [MB] (23 MBps) Copying: 612/1024 [MB] (23 MBps) Copying: 636/1024 [MB] (23 MBps) Copying: 659/1024 [MB] (23 MBps) Copying: 683/1024 [MB] (23 MBps) Copying: 707/1024 [MB] (23 MBps) Copying: 731/1024 [MB] (23 MBps) Copying: 754/1024 [MB] (23 MBps) Copying: 779/1024 [MB] (24 MBps) Copying: 803/1024 [MB] (23 MBps) Copying: 827/1024 [MB] (23 MBps) Copying: 850/1024 [MB] (23 MBps) Copying: 875/1024 [MB] (24 MBps) Copying: 899/1024 [MB] (24 MBps) Copying: 923/1024 [MB] (23 MBps) Copying: 947/1024 [MB] (23 MBps) Copying: 970/1024 [MB] (22 MBps) Copying: 993/1024 [MB] (23 MBps) Copying: 1016/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:26:39.184747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.184831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:19.632 [2024-07-26 14:26:39.184868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:19.632 [2024-07-26 14:26:39.184885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.184977] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:19.632 [2024-07-26 14:26:39.189969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.190034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:19.632 [2024-07-26 14:26:39.190060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.959 ms 00:21:19.632 [2024-07-26 14:26:39.190088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.190488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.190545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:19.632 [2024-07-26 14:26:39.190578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:21:19.632 [2024-07-26 14:26:39.190597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.198915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.198990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:19.632 [2024-07-26 14:26:39.199019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.267 ms 00:21:19.632 [2024-07-26 14:26:39.199037] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.205274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.205318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:19.632 [2024-07-26 14:26:39.205347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:21:19.632 [2024-07-26 14:26:39.205357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.232853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.232930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:19.632 [2024-07-26 14:26:39.232979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.428 ms 00:21:19.632 [2024-07-26 14:26:39.233002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.248943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.249021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:19.632 [2024-07-26 14:26:39.249052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.881 ms 00:21:19.632 [2024-07-26 14:26:39.249063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.249224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.632 [2024-07-26 14:26:39.249276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:19.632 [2024-07-26 14:26:39.249310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:21:19.632 [2024-07-26 14:26:39.249320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.632 [2024-07-26 14:26:39.278004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.633 [2024-07-26 14:26:39.278057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:19.633 [2024-07-26 14:26:39.278087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.663 ms 00:21:19.633 [2024-07-26 14:26:39.278098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.633 [2024-07-26 14:26:39.305352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.633 [2024-07-26 14:26:39.305405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:19.633 [2024-07-26 14:26:39.305436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.215 ms 00:21:19.633 [2024-07-26 14:26:39.305446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.633 [2024-07-26 14:26:39.332681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.633 [2024-07-26 14:26:39.332733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:19.633 [2024-07-26 14:26:39.332775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.196 ms 00:21:19.633 [2024-07-26 14:26:39.332786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.633 [2024-07-26 14:26:39.359553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.633 [2024-07-26 14:26:39.359605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:19.633 [2024-07-26 14:26:39.359636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.687 ms 
00:21:19.633 [2024-07-26 14:26:39.359647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.633 [2024-07-26 14:26:39.359701] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:19.633 [2024-07-26 14:26:39.359724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.359991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360072] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 
[2024-07-26 14:26:39.360376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:19.633 [2024-07-26 14:26:39.360651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:21:19.634 [2024-07-26 14:26:39.360662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:19.634 [2024-07-26 14:26:39.360981] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:19.634 [2024-07-26 14:26:39.360992] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3656f234-7c64-48c3-9e1b-ff085368fb1b 00:21:19.634 [2024-07-26 14:26:39.361010] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:19.634 [2024-07-26 14:26:39.361020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:19.634 [2024-07-26 14:26:39.361030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:19.634 [2024-07-26 14:26:39.361041] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:19.634 [2024-07-26 14:26:39.361051] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:19.634 [2024-07-26 14:26:39.361062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:19.634 [2024-07-26 14:26:39.361072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:19.634 [2024-07-26 14:26:39.361082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:19.634 [2024-07-26 14:26:39.361091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:19.634 [2024-07-26 14:26:39.361102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.634 [2024-07-26 14:26:39.361113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:19.634 [2024-07-26 14:26:39.361129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.402 ms 00:21:19.634 [2024-07-26 14:26:39.361140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.634 [2024-07-26 14:26:39.376296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.634 [2024-07-26 14:26:39.376347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:19.634 [2024-07-26 14:26:39.376406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.117 ms 00:21:19.634 [2024-07-26 14:26:39.376417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.634 [2024-07-26 14:26:39.376888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.634 [2024-07-26 14:26:39.376934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:19.634 [2024-07-26 14:26:39.376949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:21:19.634 [2024-07-26 14:26:39.376960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.410081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.410138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.894 [2024-07-26 14:26:39.410169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.410179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.410238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.410254] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.894 [2024-07-26 14:26:39.410263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.410273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.410386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.410435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.894 [2024-07-26 14:26:39.410447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.410457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.410478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.410491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.894 [2024-07-26 14:26:39.410502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.410512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.492820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.492920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.894 [2024-07-26 14:26:39.492939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.492949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.568319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.568411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.894 [2024-07-26 14:26:39.568444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.568454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.568537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.568552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.894 [2024-07-26 14:26:39.568563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.568572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.568663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.568708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.894 [2024-07-26 14:26:39.568725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.568751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.568871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.568895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.894 [2024-07-26 14:26:39.568907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.568917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.568962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.569030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:19.894 [2024-07-26 14:26:39.569045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.569056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.569100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.569122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.894 [2024-07-26 14:26:39.569134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.569145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.569194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:19.894 [2024-07-26 14:26:39.569210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.894 [2024-07-26 14:26:39.569222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:19.894 [2024-07-26 14:26:39.569233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.894 [2024-07-26 14:26:39.569371] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.611 ms, result 0 00:21:20.829 00:21:20.829 00:21:20.829 14:26:40 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:22.734 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:22.734 14:26:42 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:23.011 [2024-07-26 14:26:42.566355] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:23.011 [2024-07-26 14:26:42.566530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80762 ] 00:21:23.011 [2024-07-26 14:26:42.739942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.278 [2024-07-26 14:26:42.955530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.537 [2024-07-26 14:26:43.234664] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.537 [2024-07-26 14:26:43.234765] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.796 [2024-07-26 14:26:43.392994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.796 [2024-07-26 14:26:43.393064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.796 [2024-07-26 14:26:43.393098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:23.796 [2024-07-26 14:26:43.393108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.796 [2024-07-26 14:26:43.393165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.796 [2024-07-26 14:26:43.393182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.796 [2024-07-26 14:26:43.393194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:23.796 [2024-07-26 14:26:43.393206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.796 [2024-07-26 14:26:43.393253] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.796 [2024-07-26 14:26:43.394168] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.796 [2024-07-26 14:26:43.394225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.796 [2024-07-26 14:26:43.394239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.796 [2024-07-26 14:26:43.394250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:21:23.796 [2024-07-26 14:26:43.394260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.796 [2024-07-26 14:26:43.395443] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:23.796 [2024-07-26 14:26:43.409240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.796 [2024-07-26 14:26:43.409311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:23.796 [2024-07-26 14:26:43.409344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.799 ms 00:21:23.796 [2024-07-26 14:26:43.409354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.796 [2024-07-26 14:26:43.409420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.796 [2024-07-26 14:26:43.409441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:23.796 [2024-07-26 14:26:43.409453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:23.796 [2024-07-26 14:26:43.409463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.413852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:23.797 [2024-07-26 14:26:43.413930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.797 [2024-07-26 14:26:43.413955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:21:23.797 [2024-07-26 14:26:43.413965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.414052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.797 [2024-07-26 14:26:43.414070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.797 [2024-07-26 14:26:43.414081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:23.797 [2024-07-26 14:26:43.414090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.414194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.797 [2024-07-26 14:26:43.414211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.797 [2024-07-26 14:26:43.414222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:23.797 [2024-07-26 14:26:43.414232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.414265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.797 [2024-07-26 14:26:43.418505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.797 [2024-07-26 14:26:43.418554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.797 [2024-07-26 14:26:43.418583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.250 ms 00:21:23.797 [2024-07-26 14:26:43.418593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.418649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.797 [2024-07-26 14:26:43.418663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.797 [2024-07-26 14:26:43.418690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:23.797 [2024-07-26 14:26:43.418700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.418776] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:23.797 [2024-07-26 14:26:43.418823] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:23.797 [2024-07-26 14:26:43.418868] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:23.797 [2024-07-26 14:26:43.418893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:21:23.797 [2024-07-26 14:26:43.419021] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.797 [2024-07-26 14:26:43.419048] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.797 [2024-07-26 14:26:43.419064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:23.797 [2024-07-26 14:26:43.419087] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419100] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419112] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.797 [2024-07-26 14:26:43.419124] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.797 [2024-07-26 14:26:43.419134] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.797 [2024-07-26 14:26:43.419145] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.797 [2024-07-26 14:26:43.419157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.797 [2024-07-26 14:26:43.419173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.797 [2024-07-26 14:26:43.419184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:21:23.797 [2024-07-26 14:26:43.419195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.419293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.797 [2024-07-26 14:26:43.419317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.797 [2024-07-26 14:26:43.419330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:23.797 [2024-07-26 14:26:43.419342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.797 [2024-07-26 14:26:43.419450] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.797 [2024-07-26 14:26:43.419468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.797 [2024-07-26 14:26:43.419500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.797 [2024-07-26 14:26:43.419533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419543] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.797 [2024-07-26 14:26:43.419564] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.797 [2024-07-26 14:26:43.419583] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.797 [2024-07-26 14:26:43.419593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.797 [2024-07-26 14:26:43.419603] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.797 [2024-07-26 14:26:43.419613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.797 [2024-07-26 14:26:43.419639] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.797 [2024-07-26 14:26:43.419650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.797 [2024-07-26 14:26:43.419672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419682] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.797 [2024-07-26 14:26:43.419715] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419726] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.797 [2024-07-26 14:26:43.419746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.797 [2024-07-26 14:26:43.419776] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.797 [2024-07-26 14:26:43.419808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419818] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.797 [2024-07-26 14:26:43.419828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.797 [2024-07-26 14:26:43.419839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419849] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.797 [2024-07-26 14:26:43.419859] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.797 [2024-07-26 14:26:43.419869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.797 [2024-07-26 14:26:43.419880] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.797 [2024-07-26 14:26:43.419890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.797 [2024-07-26 14:26:43.419900] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:23.797 [2024-07-26 14:26:43.419910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.797 [2024-07-26 14:26:43.419954] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.797 [2024-07-26 14:26:43.419967] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.419978] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.797 [2024-07-26 14:26:43.419989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.797 [2024-07-26 14:26:43.419999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.797 [2024-07-26 14:26:43.420010] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.797 [2024-07-26 14:26:43.420022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.797 [2024-07-26 14:26:43.420033] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.797 [2024-07-26 14:26:43.420044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.797 
[2024-07-26 14:26:43.420055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.797 [2024-07-26 14:26:43.420065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.797 [2024-07-26 14:26:43.420076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.797 [2024-07-26 14:26:43.420087] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.797 [2024-07-26 14:26:43.420102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.797 [2024-07-26 14:26:43.420115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:23.797 [2024-07-26 14:26:43.420126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.798 [2024-07-26 14:26:43.420138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.798 [2024-07-26 14:26:43.420150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.798 [2024-07-26 14:26:43.420161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.798 [2024-07-26 14:26:43.420183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.798 [2024-07-26 14:26:43.420195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.798 [2024-07-26 14:26:43.420206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.798 [2024-07-26 14:26:43.420217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.798 [2024-07-26 14:26:43.420228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.798 [2024-07-26 14:26:43.420239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.798 [2024-07-26 14:26:43.420265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.798 [2024-07-26 14:26:43.420276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.798 [2024-07-26 14:26:43.420288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.798 [2024-07-26 14:26:43.420299] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.798 [2024-07-26 14:26:43.420325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.798 [2024-07-26 14:26:43.420341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.798 [2024-07-26 14:26:43.420352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.798 [2024-07-26 14:26:43.420363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.798 [2024-07-26 14:26:43.420373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.798 [2024-07-26 14:26:43.420385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.420412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.798 [2024-07-26 14:26:43.420423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:21:23.798 [2024-07-26 14:26:43.420434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.463252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.463355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.798 [2024-07-26 14:26:43.463393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.719 ms 00:21:23.798 [2024-07-26 14:26:43.463404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.463523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.463538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:23.798 [2024-07-26 14:26:43.463550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:23.798 [2024-07-26 14:26:43.463575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.495615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.495675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.798 [2024-07-26 14:26:43.495707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.904 ms 00:21:23.798 [2024-07-26 14:26:43.495717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.495773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.495787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.798 [2024-07-26 14:26:43.495798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.798 [2024-07-26 14:26:43.495837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.496312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.496367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.798 [2024-07-26 14:26:43.496395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:21:23.798 [2024-07-26 14:26:43.496406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.496571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.496599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.798 [2024-07-26 14:26:43.496611] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:21:23.798 [2024-07-26 14:26:43.496622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.510668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.510735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.798 [2024-07-26 14:26:43.510767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.015 ms 00:21:23.798 [2024-07-26 14:26:43.510782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.525375] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:23.798 [2024-07-26 14:26:43.525448] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:23.798 [2024-07-26 14:26:43.525483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.525495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:23.798 [2024-07-26 14:26:43.525508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.574 ms 00:21:23.798 [2024-07-26 14:26:43.525519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.798 [2024-07-26 14:26:43.557035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.798 [2024-07-26 14:26:43.557097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:24.056 [2024-07-26 14:26:43.557116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.456 ms 00:21:24.056 [2024-07-26 14:26:43.557128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.573720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.573776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:24.056 [2024-07-26 14:26:43.573808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.522 ms 00:21:24.056 [2024-07-26 14:26:43.573818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.590103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.590139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:24.056 [2024-07-26 14:26:43.590154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.242 ms 00:21:24.056 [2024-07-26 14:26:43.590164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.591071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.591103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:24.056 [2024-07-26 14:26:43.591132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:24.056 [2024-07-26 14:26:43.591142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.662314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.662391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:24.056 [2024-07-26 14:26:43.662427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.140 ms 00:21:24.056 [2024-07-26 14:26:43.662444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.673534] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:24.056 [2024-07-26 14:26:43.676165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.676200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:24.056 [2024-07-26 14:26:43.676234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.659 ms 00:21:24.056 [2024-07-26 14:26:43.676245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.676399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.676451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:24.056 [2024-07-26 14:26:43.676464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:24.056 [2024-07-26 14:26:43.676475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.676575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.676594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:24.056 [2024-07-26 14:26:43.676606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:24.056 [2024-07-26 14:26:43.676617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.056 [2024-07-26 14:26:43.676648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.056 [2024-07-26 14:26:43.676664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:24.056 [2024-07-26 14:26:43.676676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:24.056 [2024-07-26 14:26:43.676687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.057 [2024-07-26 14:26:43.676726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:24.057 [2024-07-26 14:26:43.676743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.057 [2024-07-26 14:26:43.676758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:24.057 [2024-07-26 14:26:43.676770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:24.057 [2024-07-26 14:26:43.676782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.057 [2024-07-26 14:26:43.705129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.057 [2024-07-26 14:26:43.705187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.057 [2024-07-26 14:26:43.705219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.323 ms 00:21:24.057 [2024-07-26 14:26:43.705237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.057 [2024-07-26 14:26:43.705317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.057 [2024-07-26 14:26:43.705336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.057 [2024-07-26 14:26:43.705347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:24.057 [2024-07-26 14:26:43.705357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:24.057 [2024-07-26 14:26:43.706652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 313.154 ms, result 0 00:22:08.070  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (24 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 119/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (23 MBps) Copying: 167/1024 [MB] (24 MBps) Copying: 191/1024 [MB] (23 MBps) Copying: 215/1024 [MB] (24 MBps) Copying: 239/1024 [MB] (23 MBps) Copying: 263/1024 [MB] (24 MBps) Copying: 286/1024 [MB] (23 MBps) Copying: 311/1024 [MB] (24 MBps) Copying: 335/1024 [MB] (24 MBps) Copying: 359/1024 [MB] (24 MBps) Copying: 383/1024 [MB] (23 MBps) Copying: 407/1024 [MB] (23 MBps) Copying: 432/1024 [MB] (24 MBps) Copying: 456/1024 [MB] (24 MBps) Copying: 480/1024 [MB] (24 MBps) Copying: 504/1024 [MB] (24 MBps) Copying: 528/1024 [MB] (24 MBps) Copying: 552/1024 [MB] (23 MBps) Copying: 576/1024 [MB] (24 MBps) Copying: 600/1024 [MB] (23 MBps) Copying: 623/1024 [MB] (23 MBps) Copying: 647/1024 [MB] (23 MBps) Copying: 670/1024 [MB] (23 MBps) Copying: 693/1024 [MB] (23 MBps) Copying: 717/1024 [MB] (23 MBps) Copying: 740/1024 [MB] (23 MBps) Copying: 764/1024 [MB] (23 MBps) Copying: 788/1024 [MB] (23 MBps) Copying: 812/1024 [MB] (24 MBps) Copying: 836/1024 [MB] (23 MBps) Copying: 860/1024 [MB] (24 MBps) Copying: 885/1024 [MB] (24 MBps) Copying: 908/1024 [MB] (23 MBps) Copying: 932/1024 [MB] (23 MBps) Copying: 956/1024 [MB] (23 MBps) Copying: 979/1024 [MB] (23 MBps) Copying: 1003/1024 [MB] (23 MBps) Copying: 1023/1024 [MB] (19 MBps) Copying: 1048532/1048576 [kB] (840 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:27:27.784872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.070 [2024-07-26 14:27:27.784968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:08.070 [2024-07-26 14:27:27.785006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:08.070 [2024-07-26 14:27:27.785018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.070 [2024-07-26 14:27:27.786672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:08.070 [2024-07-26 14:27:27.793598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.070 [2024-07-26 14:27:27.793648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:08.070 [2024-07-26 14:27:27.793663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.878 ms 00:22:08.070 [2024-07-26 14:27:27.793673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.070 [2024-07-26 14:27:27.804466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.070 [2024-07-26 14:27:27.804518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:08.070 [2024-07-26 14:27:27.804533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.978 ms 00:22:08.070 [2024-07-26 14:27:27.804543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.071 [2024-07-26 14:27:27.825653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.071 [2024-07-26 14:27:27.825720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:08.071 [2024-07-26 14:27:27.825736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.080 ms 00:22:08.071 [2024-07-26 
14:27:27.825747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.071 [2024-07-26 14:27:27.832113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.071 [2024-07-26 14:27:27.832158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:08.071 [2024-07-26 14:27:27.832173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.329 ms 00:22:08.071 [2024-07-26 14:27:27.832185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.330 [2024-07-26 14:27:27.859443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.330 [2024-07-26 14:27:27.859491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:08.330 [2024-07-26 14:27:27.859505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.116 ms 00:22:08.330 [2024-07-26 14:27:27.859515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.330 [2024-07-26 14:27:27.875391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.330 [2024-07-26 14:27:27.875443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:08.330 [2024-07-26 14:27:27.875457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.839 ms 00:22:08.330 [2024-07-26 14:27:27.875467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.330 [2024-07-26 14:27:27.986362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.330 [2024-07-26 14:27:27.986469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:08.330 [2024-07-26 14:27:27.986509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.832 ms 00:22:08.330 [2024-07-26 14:27:27.986528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.330 [2024-07-26 14:27:28.028477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.330 [2024-07-26 14:27:28.028560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:08.330 [2024-07-26 14:27:28.028585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.916 ms 00:22:08.330 [2024-07-26 14:27:28.028601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.330 [2024-07-26 14:27:28.067945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.330 [2024-07-26 14:27:28.068029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:08.330 [2024-07-26 14:27:28.068052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.290 ms 00:22:08.330 [2024-07-26 14:27:28.068068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.590 [2024-07-26 14:27:28.096313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.590 [2024-07-26 14:27:28.096392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:08.590 [2024-07-26 14:27:28.096434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.193 ms 00:22:08.590 [2024-07-26 14:27:28.096444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.590 [2024-07-26 14:27:28.123469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.590 [2024-07-26 14:27:28.123517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:08.590 [2024-07-26 14:27:28.123532] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.945 ms 00:22:08.590 [2024-07-26 14:27:28.123541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.590 [2024-07-26 14:27:28.123579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:08.590 [2024-07-26 14:27:28.123600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121856 / 261120 wr_cnt: 1 state: open 00:22:08.590 [2024-07-26 14:27:28.123613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:08.590 [2024-07-26 14:27:28.123704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:22:08.591 [2024-07-26 14:27:28.123889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.123999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124763] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:08.591 [2024-07-26 14:27:28.124774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:08.592 [2024-07-26 14:27:28.124785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:08.592 [2024-07-26 14:27:28.124804] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:08.592 [2024-07-26 14:27:28.124815] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3656f234-7c64-48c3-9e1b-ff085368fb1b 00:22:08.592 [2024-07-26 14:27:28.124826] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121856 00:22:08.592 [2024-07-26 14:27:28.124836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122816 00:22:08.592 [2024-07-26 14:27:28.124845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121856 00:22:08.592 [2024-07-26 14:27:28.124860] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:22:08.592 [2024-07-26 14:27:28.124870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:08.592 [2024-07-26 14:27:28.124880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:08.592 [2024-07-26 14:27:28.124894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:08.592 [2024-07-26 14:27:28.124903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:08.592 [2024-07-26 14:27:28.124912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:08.592 [2024-07-26 14:27:28.124922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.592 [2024-07-26 14:27:28.124944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:08.592 [2024-07-26 14:27:28.124955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:22:08.592 [2024-07-26 14:27:28.124966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.139765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.592 [2024-07-26 14:27:28.139810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:08.592 [2024-07-26 14:27:28.139835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.759 ms 00:22:08.592 [2024-07-26 14:27:28.139846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.140297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.592 [2024-07-26 14:27:28.140318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:08.592 [2024-07-26 14:27:28.140330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:22:08.592 [2024-07-26 14:27:28.140340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.172375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.172456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.592 [2024-07-26 14:27:28.172476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.172486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.172558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:08.592 [2024-07-26 14:27:28.172572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.592 [2024-07-26 14:27:28.172582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.172592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.172667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.172684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.592 [2024-07-26 14:27:28.172694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.172709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.172728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.172741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.592 [2024-07-26 14:27:28.172750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.172760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.263334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.263391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.592 [2024-07-26 14:27:28.263406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.263421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.334938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.592 [2024-07-26 14:27:28.335032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.335133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.592 [2024-07-26 14:27:28.335160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.335217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.592 [2024-07-26 14:27:28.335241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.335356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.592 [2024-07-26 14:27:28.335384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 
[2024-07-26 14:27:28.335433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:08.592 [2024-07-26 14:27:28.335464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.335512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.592 [2024-07-26 14:27:28.335535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.335594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:08.592 [2024-07-26 14:27:28.335610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.592 [2024-07-26 14:27:28.335620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:08.592 [2024-07-26 14:27:28.335630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.592 [2024-07-26 14:27:28.335748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 551.820 ms, result 0 00:22:10.496 00:22:10.496 00:22:10.496 14:27:29 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:10.496 [2024-07-26 14:27:29.924218] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
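The ftl_restore pass above invokes spdk_dd against the ftl0 bdev with --skip=131072 and --count=262144. If --skip and --count are counted in input blocks (dd semantics), the 1024 MB total reported by the progress meter that follows implies a 4 KiB logical block, i.e. a 1 GiB read starting 512 MiB into the device. A minimal back-of-the-envelope check, written here as a standalone Python sketch (not part of the SPDK tree; every figure is copied from the surrounding log lines):

# Quick arithmetic on the spdk_dd restore pass shown above.
# Hypothetical standalone helper; numbers are taken verbatim from this log.

MIB = 1024 * 1024

count_blocks = 262144          # --count from the spdk_dd command line
skip_blocks = 131072           # --skip  from the spdk_dd command line
total_copied_mib = 1024        # final "Copying: 1024/1024 [MB]" figure
avg_mbps = 23                  # "(average 23 MBps)" from the progress meter

# The progress meter says 262144 blocks == 1024 MiB, so each block is 4 KiB.
block_size = total_copied_mib * MIB // count_blocks
assert block_size == 4096

offset_mib = skip_blocks * block_size / MIB     # where the read starts
read_mib = count_blocks * block_size / MIB      # how much is read
expected_seconds = total_copied_mib / avg_mbps  # rough wall-clock estimate

print(f"block size      : {block_size} B")
print(f"read offset     : {offset_mib:.0f} MiB")
print(f"read size       : {read_mib:.0f} MiB")
print(f"expected runtime: ~{expected_seconds:.0f} s at {avg_mbps} MB/s")

At the reported average of 23 MBps that works out to roughly 45 seconds of copy time, which is consistent with the gap between the 14:27:30 startup completion and the 14:28:15 deinit timestamps later in this log.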
00:22:10.496 [2024-07-26 14:27:29.924386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81231 ] 00:22:10.496 [2024-07-26 14:27:30.095841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.496 [2024-07-26 14:27:30.256753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.065 [2024-07-26 14:27:30.527385] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.065 [2024-07-26 14:27:30.527449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:11.065 [2024-07-26 14:27:30.685534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.685600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:11.065 [2024-07-26 14:27:30.685617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:11.065 [2024-07-26 14:27:30.685629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.685689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.685707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.065 [2024-07-26 14:27:30.685719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:11.065 [2024-07-26 14:27:30.685733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.685767] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:11.065 [2024-07-26 14:27:30.686605] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:11.065 [2024-07-26 14:27:30.686638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.686650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.065 [2024-07-26 14:27:30.686662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.881 ms 00:22:11.065 [2024-07-26 14:27:30.686673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.687936] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:11.065 [2024-07-26 14:27:30.702466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.702529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:11.065 [2024-07-26 14:27:30.702557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.547 ms 00:22:11.065 [2024-07-26 14:27:30.702572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.702654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.702675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:11.065 [2024-07-26 14:27:30.702687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:11.065 [2024-07-26 14:27:30.702698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.707086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:11.065 [2024-07-26 14:27:30.707135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.065 [2024-07-26 14:27:30.707149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.284 ms 00:22:11.065 [2024-07-26 14:27:30.707160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.707247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.707265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.065 [2024-07-26 14:27:30.707276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:11.065 [2024-07-26 14:27:30.707286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.707340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.707357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:11.065 [2024-07-26 14:27:30.707368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:11.065 [2024-07-26 14:27:30.707379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.707410] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.065 [2024-07-26 14:27:30.711372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.711416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.065 [2024-07-26 14:27:30.711429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.970 ms 00:22:11.065 [2024-07-26 14:27:30.711439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.711480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.711494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:11.065 [2024-07-26 14:27:30.711506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:11.065 [2024-07-26 14:27:30.711516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.711558] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:11.065 [2024-07-26 14:27:30.711588] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:11.065 [2024-07-26 14:27:30.711628] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:11.065 [2024-07-26 14:27:30.711709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:11.065 [2024-07-26 14:27:30.711809] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:11.065 [2024-07-26 14:27:30.711823] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:11.065 [2024-07-26 14:27:30.711837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:11.065 [2024-07-26 14:27:30.711851] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:11.065 [2024-07-26 14:27:30.711864] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:11.065 [2024-07-26 14:27:30.711876] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:11.065 [2024-07-26 14:27:30.711887] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:11.065 [2024-07-26 14:27:30.711898] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:11.065 [2024-07-26 14:27:30.711908] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:11.065 [2024-07-26 14:27:30.711920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.711935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:11.065 [2024-07-26 14:27:30.711947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:22:11.065 [2024-07-26 14:27:30.711974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.712094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.065 [2024-07-26 14:27:30.712112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:11.065 [2024-07-26 14:27:30.712125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:11.065 [2024-07-26 14:27:30.712136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.065 [2024-07-26 14:27:30.712248] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:11.065 [2024-07-26 14:27:30.712272] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:11.065 [2024-07-26 14:27:30.712293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.065 [2024-07-26 14:27:30.712305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.065 [2024-07-26 14:27:30.712332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:11.065 [2024-07-26 14:27:30.712360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:11.065 [2024-07-26 14:27:30.712371] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:11.065 [2024-07-26 14:27:30.712382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:11.065 [2024-07-26 14:27:30.712392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:11.065 [2024-07-26 14:27:30.712402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.066 [2024-07-26 14:27:30.712413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:11.066 [2024-07-26 14:27:30.712438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:11.066 [2024-07-26 14:27:30.712464] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.066 [2024-07-26 14:27:30.712475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:11.066 [2024-07-26 14:27:30.712485] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:11.066 [2024-07-26 14:27:30.712495] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:11.066 [2024-07-26 14:27:30.712515] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712526] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712536] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:11.066 [2024-07-26 14:27:30.712558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712569] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712579] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:11.066 [2024-07-26 14:27:30.712589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712609] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:11.066 [2024-07-26 14:27:30.712619] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:11.066 [2024-07-26 14:27:30.712649] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:11.066 [2024-07-26 14:27:30.712680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712689] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.066 [2024-07-26 14:27:30.712700] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:11.066 [2024-07-26 14:27:30.712710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:11.066 [2024-07-26 14:27:30.712720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.066 [2024-07-26 14:27:30.712732] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:11.066 [2024-07-26 14:27:30.712743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:11.066 [2024-07-26 14:27:30.712753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:11.066 [2024-07-26 14:27:30.712773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:11.066 [2024-07-26 14:27:30.712782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712792] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:11.066 [2024-07-26 14:27:30.712803] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:11.066 [2024-07-26 14:27:30.712814] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.066 [2024-07-26 14:27:30.712850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:11.066 [2024-07-26 14:27:30.712860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:11.066 [2024-07-26 14:27:30.712870] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:11.066 
[2024-07-26 14:27:30.712880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:11.066 [2024-07-26 14:27:30.712890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:11.066 [2024-07-26 14:27:30.712900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:11.066 [2024-07-26 14:27:30.712911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:11.066 [2024-07-26 14:27:30.712924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.712937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:11.066 [2024-07-26 14:27:30.712948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:11.066 [2024-07-26 14:27:30.712974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:11.066 [2024-07-26 14:27:30.712999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:11.066 [2024-07-26 14:27:30.713013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:11.066 [2024-07-26 14:27:30.713024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:11.066 [2024-07-26 14:27:30.713035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:11.066 [2024-07-26 14:27:30.713046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:11.066 [2024-07-26 14:27:30.713057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:11.066 [2024-07-26 14:27:30.713068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.713079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.713090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.713101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.713113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:11.066 [2024-07-26 14:27:30.713124] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:11.066 [2024-07-26 14:27:30.713137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.713153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.066 [2024-07-26 14:27:30.713164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:11.066 [2024-07-26 14:27:30.713175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:11.066 [2024-07-26 14:27:30.713186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:11.066 [2024-07-26 14:27:30.713199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.713210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:11.066 [2024-07-26 14:27:30.713222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:22:11.066 [2024-07-26 14:27:30.713233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.760760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.760823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.066 [2024-07-26 14:27:30.760841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.443 ms 00:22:11.066 [2024-07-26 14:27:30.760852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.761033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.761052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:11.066 [2024-07-26 14:27:30.761068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:11.066 [2024-07-26 14:27:30.761079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.798557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.798617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.066 [2024-07-26 14:27:30.798634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.348 ms 00:22:11.066 [2024-07-26 14:27:30.798645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.798707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.798722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.066 [2024-07-26 14:27:30.798734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:11.066 [2024-07-26 14:27:30.798749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.799216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.799241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.066 [2024-07-26 14:27:30.799256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:22:11.066 [2024-07-26 14:27:30.799268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.799477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.799495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.066 [2024-07-26 14:27:30.799507] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:22:11.066 [2024-07-26 14:27:30.799518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.066 [2024-07-26 14:27:30.815376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.066 [2024-07-26 14:27:30.815441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.066 [2024-07-26 14:27:30.815455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.828 ms 00:22:11.066 [2024-07-26 14:27:30.815469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.832231] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:11.326 [2024-07-26 14:27:30.832288] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:11.326 [2024-07-26 14:27:30.832307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.832320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:11.326 [2024-07-26 14:27:30.832349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.711 ms 00:22:11.326 [2024-07-26 14:27:30.832374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.859215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.859269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:11.326 [2024-07-26 14:27:30.859284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.754 ms 00:22:11.326 [2024-07-26 14:27:30.859295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.872897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.872954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:11.326 [2024-07-26 14:27:30.872968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.560 ms 00:22:11.326 [2024-07-26 14:27:30.872978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.886132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.886181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:11.326 [2024-07-26 14:27:30.886196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.097 ms 00:22:11.326 [2024-07-26 14:27:30.886205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.886918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.886974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:11.326 [2024-07-26 14:27:30.886991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:22:11.326 [2024-07-26 14:27:30.887002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.949731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.949810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:11.326 [2024-07-26 14:27:30.949828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.699 ms 00:22:11.326 [2024-07-26 14:27:30.949845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.960728] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:11.326 [2024-07-26 14:27:30.962974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.963017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:11.326 [2024-07-26 14:27:30.963031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.052 ms 00:22:11.326 [2024-07-26 14:27:30.963042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.963141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.963160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:11.326 [2024-07-26 14:27:30.963172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:11.326 [2024-07-26 14:27:30.963182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.964750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.964795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:11.326 [2024-07-26 14:27:30.964807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:22:11.326 [2024-07-26 14:27:30.964817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.964850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.964864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:11.326 [2024-07-26 14:27:30.964876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:11.326 [2024-07-26 14:27:30.964885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.964965] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:11.326 [2024-07-26 14:27:30.964983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.964998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:11.326 [2024-07-26 14:27:30.965009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:11.326 [2024-07-26 14:27:30.965019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.991494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.991543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:11.326 [2024-07-26 14:27:30.991559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.448 ms 00:22:11.326 [2024-07-26 14:27:30.991576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.326 [2024-07-26 14:27:30.991651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.326 [2024-07-26 14:27:30.991668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:11.326 [2024-07-26 14:27:30.991681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:11.326 [2024-07-26 14:27:30.991691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
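Every management step in the startup sequence above is traced as an Action (or Rollback) header followed by name, duration and status lines from mngt/ftl_mngt.c, and the finish_msg entry that follows totals the whole 'FTL startup' process (312.011 ms this time). Pairing the name and duration lines makes it easy to see where that time goes; in this run 'Restore P2L checkpoints' (62.699 ms), 'Initialize metadata' (47.443 ms) and 'Initialize NV cache' (37.348 ms) dominate. A minimal parser sketch, assuming one log entry per line and the exact wording shown above (this is a hypothetical helper, not an SPDK utility):

import re

# Pair each "name:" trace_step line with the "duration:" line that follows it
# and rank the FTL management steps by cost.
def step_durations(log_lines):
    steps, current_name = [], None
    for line in log_lines:
        if "trace_step" not in line:
            continue
        m = re.search(r"name: (.*)", line)
        if m:
            current_name = m.group(1).strip()
            continue
        m = re.search(r"duration: ([0-9.]+) ms", line)
        if m and current_name is not None:
            steps.append((current_name, float(m.group(1))))
            current_name = None
    return sorted(steps, key=lambda s: s[1], reverse=True)

# Two entries copied from the startup sequence above, one entry per line:
sample = [
    "[2024-07-26 14:27:30.886181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata",
    "[2024-07-26 14:27:30.886196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.097 ms",
    "[2024-07-26 14:27:30.949810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints",
    "[2024-07-26 14:27:30.949828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.699 ms",
]
print(step_durations(sample))
# [('Restore P2L checkpoints', 62.699), ('Restore trim metadata', 13.097)]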
00:22:11.326 [2024-07-26 14:27:30.998914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 312.011 ms, result 0 00:22:55.933  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (24 MBps) Copying: 70/1024 [MB] (23 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 117/1024 [MB] (23 MBps) Copying: 141/1024 [MB] (24 MBps) Copying: 164/1024 [MB] (23 MBps) Copying: 188/1024 [MB] (23 MBps) Copying: 211/1024 [MB] (23 MBps) Copying: 235/1024 [MB] (23 MBps) Copying: 257/1024 [MB] (22 MBps) Copying: 280/1024 [MB] (22 MBps) Copying: 303/1024 [MB] (23 MBps) Copying: 327/1024 [MB] (23 MBps) Copying: 351/1024 [MB] (24 MBps) Copying: 375/1024 [MB] (23 MBps) Copying: 399/1024 [MB] (23 MBps) Copying: 423/1024 [MB] (24 MBps) Copying: 447/1024 [MB] (24 MBps) Copying: 471/1024 [MB] (23 MBps) Copying: 495/1024 [MB] (23 MBps) Copying: 519/1024 [MB] (23 MBps) Copying: 543/1024 [MB] (23 MBps) Copying: 566/1024 [MB] (23 MBps) Copying: 590/1024 [MB] (23 MBps) Copying: 613/1024 [MB] (23 MBps) Copying: 637/1024 [MB] (23 MBps) Copying: 660/1024 [MB] (23 MBps) Copying: 684/1024 [MB] (24 MBps) Copying: 708/1024 [MB] (23 MBps) Copying: 732/1024 [MB] (23 MBps) Copying: 756/1024 [MB] (23 MBps) Copying: 780/1024 [MB] (23 MBps) Copying: 803/1024 [MB] (23 MBps) Copying: 827/1024 [MB] (23 MBps) Copying: 849/1024 [MB] (22 MBps) Copying: 872/1024 [MB] (22 MBps) Copying: 895/1024 [MB] (22 MBps) Copying: 918/1024 [MB] (23 MBps) Copying: 942/1024 [MB] (23 MBps) Copying: 965/1024 [MB] (22 MBps) Copying: 988/1024 [MB] (23 MBps) Copying: 1010/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:28:15.405189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.933 [2024-07-26 14:28:15.405296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:55.933 [2024-07-26 14:28:15.405317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:55.933 [2024-07-26 14:28:15.405329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.933 [2024-07-26 14:28:15.405374] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.933 [2024-07-26 14:28:15.408743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.408778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:55.934 [2024-07-26 14:28:15.408795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.347 ms 00:22:55.934 [2024-07-26 14:28:15.408806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.409054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.409079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:55.934 [2024-07-26 14:28:15.409093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:22:55.934 [2024-07-26 14:28:15.409105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.413474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.413514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:55.934 [2024-07-26 14:28:15.413530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.343 ms 00:22:55.934 [2024-07-26 14:28:15.413543] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.419538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.419571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:55.934 [2024-07-26 14:28:15.419585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.924 ms 00:22:55.934 [2024-07-26 14:28:15.419595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.448686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.448749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:55.934 [2024-07-26 14:28:15.448767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.996 ms 00:22:55.934 [2024-07-26 14:28:15.448778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.464805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.464865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:55.934 [2024-07-26 14:28:15.464888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.979 ms 00:22:55.934 [2024-07-26 14:28:15.464911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.580010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.580134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:55.934 [2024-07-26 14:28:15.580158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.041 ms 00:22:55.934 [2024-07-26 14:28:15.580187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.606543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.606613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:55.934 [2024-07-26 14:28:15.606631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.330 ms 00:22:55.934 [2024-07-26 14:28:15.606643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.633926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.633975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:55.934 [2024-07-26 14:28:15.633993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.241 ms 00:22:55.934 [2024-07-26 14:28:15.634004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.663421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.663492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:55.934 [2024-07-26 14:28:15.663509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.374 ms 00:22:55.934 [2024-07-26 14:28:15.663549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.689486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.934 [2024-07-26 14:28:15.689577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:55.934 [2024-07-26 14:28:15.689609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.834 ms 
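Once the clean state is set, ftl_debug.c dumps per-band validity (below) and then the device counters. The numbers are directly interpretable: in the earlier 'FTL shutdown' dump the device reported 122816 total writes against 121856 user writes, and the logged WAF of 1.0079 is simply their ratio; each band line reads as valid blocks over the band's 261120-block capacity. A small sketch of that arithmetic, with the figures copied from this log (hypothetical standalone snippet, not an SPDK tool):

# WAF and band utilization from the ftl_debug.c dump lines in this log.
total_writes = 122816   # "total writes" from the earlier shutdown dump
user_writes = 121856    # "user writes"  from the same dump
waf = total_writes / user_writes
print(f"WAF = {waf:.4f}")                      # 1.0079, matching the logged value

band_valid, band_size = 133632, 261120          # "Band 1: 133632 / 261120" in the dump below
print(f"Band 1 utilization = {band_valid / band_size:.1%}")   # ~51.2%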
00:22:55.934 [2024-07-26 14:28:15.689620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.934 [2024-07-26 14:28:15.689726] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:55.934 [2024-07-26 14:28:15.689761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:55.934 [2024-07-26 14:28:15.689783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.689978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690237] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 
[2024-07-26 14:28:15.690541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:55.934 [2024-07-26 14:28:15.690588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:22:55.935 [2024-07-26 14:28:15.690833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.690996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 
0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:55.935 [2024-07-26 14:28:15.691148] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:55.935 [2024-07-26 14:28:15.691160] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3656f234-7c64-48c3-9e1b-ff085368fb1b 00:22:55.935 [2024-07-26 14:28:15.691171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:55.935 [2024-07-26 14:28:15.691181] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 12736 00:22:55.935 [2024-07-26 14:28:15.691191] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 11776 00:22:55.935 [2024-07-26 14:28:15.691211] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0815 00:22:55.935 [2024-07-26 14:28:15.691221] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:55.935 [2024-07-26 14:28:15.691231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:55.935 [2024-07-26 14:28:15.691246] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:55.935 [2024-07-26 14:28:15.691255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:55.935 [2024-07-26 14:28:15.691264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:55.935 [2024-07-26 14:28:15.691275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.935 [2024-07-26 14:28:15.691286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:55.935 [2024-07-26 14:28:15.691298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:22:55.935 [2024-07-26 14:28:15.691309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.194 [2024-07-26 14:28:15.707230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.194 [2024-07-26 14:28:15.707285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:56.194 [2024-07-26 14:28:15.707302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.828 ms 00:22:56.194 [2024-07-26 14:28:15.707328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.194 [2024-07-26 14:28:15.707752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.194 [2024-07-26 14:28:15.707778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:56.194 [2024-07-26 14:28:15.707791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:22:56.194 [2024-07-26 14:28:15.707801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.194 [2024-07-26 14:28:15.737824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.194 [2024-07-26 14:28:15.737884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.194 [2024-07-26 14:28:15.737913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.194 [2024-07-26 14:28:15.737925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.194 [2024-07-26 14:28:15.737978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.194 [2024-07-26 14:28:15.737992] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.194 [2024-07-26 14:28:15.738036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.194 [2024-07-26 14:28:15.738062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.194 [2024-07-26 14:28:15.738139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.194 [2024-07-26 14:28:15.738165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.194 [2024-07-26 14:28:15.738177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.194 [2024-07-26 14:28:15.738194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.738216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.738236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.195 [2024-07-26 14:28:15.738248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.738259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.817760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.817840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.195 [2024-07-26 14:28:15.817857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.817872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.195 [2024-07-26 14:28:15.889149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.195 [2024-07-26 14:28:15.889296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.195 [2024-07-26 14:28:15.889380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.195 [2024-07-26 14:28:15.889579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889638] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:56.195 [2024-07-26 14:28:15.889673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.195 [2024-07-26 14:28:15.889791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.889864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:56.195 [2024-07-26 14:28:15.889882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.195 [2024-07-26 14:28:15.889893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:56.195 [2024-07-26 14:28:15.889946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.195 [2024-07-26 14:28:15.890081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 484.863 ms, result 0 00:22:57.129 00:22:57.129 00:22:57.130 14:28:16 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:59.031 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:59.031 14:28:18 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:59.031 14:28:18 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:59.031 14:28:18 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79597 00:22:59.290 14:28:18 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 79597 ']' 00:22:59.290 14:28:18 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 79597 00:22:59.290 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (79597) - No such process 00:22:59.290 Process with pid 79597 is not found 00:22:59.290 14:28:18 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 79597 is not found' 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:59.290 Remove shared memory files 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:59.290 14:28:18 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:59.290 00:22:59.290 real 3m29.070s 00:22:59.290 user 3m15.464s 00:22:59.290 sys 0m14.546s 00:22:59.290 14:28:18 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:22:59.290 14:28:18 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:59.290 ************************************ 00:22:59.290 END TEST ftl_restore 00:22:59.290 ************************************ 00:22:59.290 14:28:18 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:59.290 14:28:18 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:59.290 14:28:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:59.290 14:28:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:59.290 ************************************ 00:22:59.290 START TEST ftl_dirty_shutdown 00:22:59.290 ************************************ 00:22:59.290 14:28:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:59.549 * Looking for test storage... 00:22:59.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81774 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81774 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 81774 ']' 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.549 14:28:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:59.549 [2024-07-26 14:28:19.180460] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:59.550 [2024-07-26 14:28:19.180604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81774 ] 00:22:59.808 [2024-07-26 14:28:19.350781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.067 [2024-07-26 14:28:19.575861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:00.634 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:00.904 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:01.176 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:01.176 { 00:23:01.176 "name": "nvme0n1", 00:23:01.176 "aliases": [ 00:23:01.176 "cdf7723f-6b06-4545-a1df-076ac4ddd7f9" 00:23:01.176 ], 00:23:01.176 "product_name": "NVMe disk", 00:23:01.176 "block_size": 4096, 00:23:01.176 "num_blocks": 1310720, 00:23:01.176 "uuid": "cdf7723f-6b06-4545-a1df-076ac4ddd7f9", 00:23:01.176 "assigned_rate_limits": { 00:23:01.176 "rw_ios_per_sec": 0, 00:23:01.176 "rw_mbytes_per_sec": 0, 00:23:01.176 "r_mbytes_per_sec": 0, 00:23:01.176 "w_mbytes_per_sec": 0 00:23:01.176 }, 00:23:01.176 "claimed": true, 00:23:01.176 "claim_type": "read_many_write_one", 00:23:01.176 "zoned": false, 00:23:01.176 "supported_io_types": { 00:23:01.176 "read": true, 00:23:01.176 "write": true, 00:23:01.176 "unmap": true, 00:23:01.176 "flush": true, 00:23:01.176 "reset": true, 00:23:01.177 "nvme_admin": true, 00:23:01.177 "nvme_io": true, 00:23:01.177 "nvme_io_md": false, 00:23:01.177 "write_zeroes": true, 00:23:01.177 "zcopy": false, 00:23:01.177 "get_zone_info": false, 00:23:01.177 "zone_management": false, 00:23:01.177 "zone_append": false, 00:23:01.177 "compare": true, 00:23:01.177 "compare_and_write": false, 00:23:01.177 "abort": true, 00:23:01.177 "seek_hole": false, 00:23:01.177 "seek_data": false, 00:23:01.177 "copy": true, 00:23:01.177 
"nvme_iov_md": false 00:23:01.177 }, 00:23:01.177 "driver_specific": { 00:23:01.177 "nvme": [ 00:23:01.177 { 00:23:01.177 "pci_address": "0000:00:11.0", 00:23:01.177 "trid": { 00:23:01.177 "trtype": "PCIe", 00:23:01.177 "traddr": "0000:00:11.0" 00:23:01.177 }, 00:23:01.177 "ctrlr_data": { 00:23:01.177 "cntlid": 0, 00:23:01.177 "vendor_id": "0x1b36", 00:23:01.177 "model_number": "QEMU NVMe Ctrl", 00:23:01.177 "serial_number": "12341", 00:23:01.177 "firmware_revision": "8.0.0", 00:23:01.177 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:01.177 "oacs": { 00:23:01.177 "security": 0, 00:23:01.177 "format": 1, 00:23:01.177 "firmware": 0, 00:23:01.177 "ns_manage": 1 00:23:01.177 }, 00:23:01.177 "multi_ctrlr": false, 00:23:01.177 "ana_reporting": false 00:23:01.177 }, 00:23:01.177 "vs": { 00:23:01.177 "nvme_version": "1.4" 00:23:01.177 }, 00:23:01.177 "ns_data": { 00:23:01.177 "id": 1, 00:23:01.177 "can_share": false 00:23:01.177 } 00:23:01.177 } 00:23:01.177 ], 00:23:01.177 "mp_policy": "active_passive" 00:23:01.177 } 00:23:01.177 } 00:23:01.177 ]' 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:01.177 14:28:20 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:01.436 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=5c24ad78-8741-4065-b239-a5cc577c2b5b 00:23:01.436 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:01.436 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c24ad78-8741-4065-b239-a5cc577c2b5b 00:23:01.695 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:01.954 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5aece5a9-3cf6-4732-8124-40cb1c7df842 00:23:01.954 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5aece5a9-3cf6-4732-8124-40cb1c7df842 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:02.213 
14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:02.213 14:28:21 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.471 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:02.471 { 00:23:02.471 "name": "a8fb48a1-3cf8-4e25-841b-07829ee1c03e", 00:23:02.471 "aliases": [ 00:23:02.471 "lvs/nvme0n1p0" 00:23:02.471 ], 00:23:02.471 "product_name": "Logical Volume", 00:23:02.471 "block_size": 4096, 00:23:02.471 "num_blocks": 26476544, 00:23:02.471 "uuid": "a8fb48a1-3cf8-4e25-841b-07829ee1c03e", 00:23:02.471 "assigned_rate_limits": { 00:23:02.471 "rw_ios_per_sec": 0, 00:23:02.471 "rw_mbytes_per_sec": 0, 00:23:02.471 "r_mbytes_per_sec": 0, 00:23:02.471 "w_mbytes_per_sec": 0 00:23:02.471 }, 00:23:02.471 "claimed": false, 00:23:02.471 "zoned": false, 00:23:02.471 "supported_io_types": { 00:23:02.471 "read": true, 00:23:02.471 "write": true, 00:23:02.471 "unmap": true, 00:23:02.471 "flush": false, 00:23:02.471 "reset": true, 00:23:02.471 "nvme_admin": false, 00:23:02.471 "nvme_io": false, 00:23:02.471 "nvme_io_md": false, 00:23:02.471 "write_zeroes": true, 00:23:02.471 "zcopy": false, 00:23:02.471 "get_zone_info": false, 00:23:02.471 "zone_management": false, 00:23:02.471 "zone_append": false, 00:23:02.471 "compare": false, 00:23:02.471 "compare_and_write": false, 00:23:02.471 "abort": false, 00:23:02.471 "seek_hole": true, 00:23:02.471 "seek_data": true, 00:23:02.471 "copy": false, 00:23:02.471 "nvme_iov_md": false 00:23:02.471 }, 00:23:02.471 "driver_specific": { 00:23:02.471 "lvol": { 00:23:02.471 "lvol_store_uuid": "5aece5a9-3cf6-4732-8124-40cb1c7df842", 00:23:02.471 "base_bdev": "nvme0n1", 00:23:02.471 "thin_provision": true, 00:23:02.471 "num_allocated_clusters": 0, 00:23:02.471 "snapshot": false, 00:23:02.471 "clone": false, 00:23:02.471 "esnap_clone": false 00:23:02.471 } 00:23:02.471 } 00:23:02.471 } 00:23:02.471 ]' 00:23:02.471 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:02.730 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:02.988 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:03.247 { 00:23:03.247 "name": "a8fb48a1-3cf8-4e25-841b-07829ee1c03e", 00:23:03.247 "aliases": [ 00:23:03.247 "lvs/nvme0n1p0" 00:23:03.247 ], 00:23:03.247 "product_name": "Logical Volume", 00:23:03.247 "block_size": 4096, 00:23:03.247 "num_blocks": 26476544, 00:23:03.247 "uuid": "a8fb48a1-3cf8-4e25-841b-07829ee1c03e", 00:23:03.247 "assigned_rate_limits": { 00:23:03.247 "rw_ios_per_sec": 0, 00:23:03.247 "rw_mbytes_per_sec": 0, 00:23:03.247 "r_mbytes_per_sec": 0, 00:23:03.247 "w_mbytes_per_sec": 0 00:23:03.247 }, 00:23:03.247 "claimed": false, 00:23:03.247 "zoned": false, 00:23:03.247 "supported_io_types": { 00:23:03.247 "read": true, 00:23:03.247 "write": true, 00:23:03.247 "unmap": true, 00:23:03.247 "flush": false, 00:23:03.247 "reset": true, 00:23:03.247 "nvme_admin": false, 00:23:03.247 "nvme_io": false, 00:23:03.247 "nvme_io_md": false, 00:23:03.247 "write_zeroes": true, 00:23:03.247 "zcopy": false, 00:23:03.247 "get_zone_info": false, 00:23:03.247 "zone_management": false, 00:23:03.247 "zone_append": false, 00:23:03.247 "compare": false, 00:23:03.247 "compare_and_write": false, 00:23:03.247 "abort": false, 00:23:03.247 "seek_hole": true, 00:23:03.247 "seek_data": true, 00:23:03.247 "copy": false, 00:23:03.247 "nvme_iov_md": false 00:23:03.247 }, 00:23:03.247 "driver_specific": { 00:23:03.247 "lvol": { 00:23:03.247 "lvol_store_uuid": "5aece5a9-3cf6-4732-8124-40cb1c7df842", 00:23:03.247 "base_bdev": "nvme0n1", 00:23:03.247 "thin_provision": true, 00:23:03.247 "num_allocated_clusters": 0, 00:23:03.247 "snapshot": false, 00:23:03.247 "clone": false, 00:23:03.247 "esnap_clone": false 00:23:03.247 } 00:23:03.247 } 00:23:03.247 } 00:23:03.247 ]' 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:03.247 14:28:22 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:23:03.506 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a8fb48a1-3cf8-4e25-841b-07829ee1c03e 00:23:03.764 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:03.764 { 00:23:03.764 "name": "a8fb48a1-3cf8-4e25-841b-07829ee1c03e", 00:23:03.764 "aliases": [ 00:23:03.764 "lvs/nvme0n1p0" 00:23:03.764 ], 00:23:03.764 "product_name": "Logical Volume", 00:23:03.764 "block_size": 4096, 00:23:03.764 "num_blocks": 26476544, 00:23:03.764 "uuid": "a8fb48a1-3cf8-4e25-841b-07829ee1c03e", 00:23:03.764 "assigned_rate_limits": { 00:23:03.764 "rw_ios_per_sec": 0, 00:23:03.764 "rw_mbytes_per_sec": 0, 00:23:03.764 "r_mbytes_per_sec": 0, 00:23:03.764 "w_mbytes_per_sec": 0 00:23:03.764 }, 00:23:03.764 "claimed": false, 00:23:03.764 "zoned": false, 00:23:03.764 "supported_io_types": { 00:23:03.764 "read": true, 00:23:03.764 "write": true, 00:23:03.764 "unmap": true, 00:23:03.764 "flush": false, 00:23:03.764 "reset": true, 00:23:03.764 "nvme_admin": false, 00:23:03.764 "nvme_io": false, 00:23:03.764 "nvme_io_md": false, 00:23:03.764 "write_zeroes": true, 00:23:03.764 "zcopy": false, 00:23:03.764 "get_zone_info": false, 00:23:03.764 "zone_management": false, 00:23:03.764 "zone_append": false, 00:23:03.764 "compare": false, 00:23:03.764 "compare_and_write": false, 00:23:03.764 "abort": false, 00:23:03.764 "seek_hole": true, 00:23:03.764 "seek_data": true, 00:23:03.764 "copy": false, 00:23:03.764 "nvme_iov_md": false 00:23:03.764 }, 00:23:03.764 "driver_specific": { 00:23:03.764 "lvol": { 00:23:03.764 "lvol_store_uuid": "5aece5a9-3cf6-4732-8124-40cb1c7df842", 00:23:03.764 "base_bdev": "nvme0n1", 00:23:03.764 "thin_provision": true, 00:23:03.764 "num_allocated_clusters": 0, 00:23:03.764 "snapshot": false, 00:23:03.764 "clone": false, 00:23:03.764 "esnap_clone": false 00:23:03.764 } 00:23:03.764 } 00:23:03.764 } 00:23:03.764 ]' 00:23:03.764 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:03.764 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:23:03.764 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a8fb48a1-3cf8-4e25-841b-07829ee1c03e 
--l2p_dram_limit 10' 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:04.024 14:28:23 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a8fb48a1-3cf8-4e25-841b-07829ee1c03e --l2p_dram_limit 10 -c nvc0n1p0 00:23:04.024 [2024-07-26 14:28:23.755797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.024 [2024-07-26 14:28:23.755864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:04.024 [2024-07-26 14:28:23.755901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:04.024 [2024-07-26 14:28:23.755928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.024 [2024-07-26 14:28:23.756028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.024 [2024-07-26 14:28:23.756076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:04.024 [2024-07-26 14:28:23.756089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:04.024 [2024-07-26 14:28:23.756103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.024 [2024-07-26 14:28:23.756134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:04.024 [2024-07-26 14:28:23.757166] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:04.024 [2024-07-26 14:28:23.757218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.024 [2024-07-26 14:28:23.757239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:04.024 [2024-07-26 14:28:23.757253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:23:04.024 [2024-07-26 14:28:23.757266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.757567] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1e7712e9-1e22-48e6-a108-4ae12c2113c1 00:23:04.025 [2024-07-26 14:28:23.758679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.758733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:04.025 [2024-07-26 14:28:23.758768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:04.025 [2024-07-26 14:28:23.758780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.763632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.763689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:04.025 [2024-07-26 14:28:23.763723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.792 ms 00:23:04.025 [2024-07-26 14:28:23.763734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.763848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.763868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:04.025 [2024-07-26 14:28:23.763882] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:23:04.025 [2024-07-26 14:28:23.763893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.764023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.764069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:04.025 [2024-07-26 14:28:23.764088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:04.025 [2024-07-26 14:28:23.764099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.764137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:04.025 [2024-07-26 14:28:23.768522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.768565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:04.025 [2024-07-26 14:28:23.768596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.398 ms 00:23:04.025 [2024-07-26 14:28:23.768608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.768651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.768668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:04.025 [2024-07-26 14:28:23.768680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:04.025 [2024-07-26 14:28:23.768692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.768760] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:04.025 [2024-07-26 14:28:23.768913] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:04.025 [2024-07-26 14:28:23.768947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:04.025 [2024-07-26 14:28:23.768968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:04.025 [2024-07-26 14:28:23.768983] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:04.025 [2024-07-26 14:28:23.768997] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769008] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:04.025 [2024-07-26 14:28:23.769025] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:04.025 [2024-07-26 14:28:23.769035] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:04.025 [2024-07-26 14:28:23.769047] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:04.025 [2024-07-26 14:28:23.769058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.769071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:04.025 [2024-07-26 14:28:23.769082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:23:04.025 [2024-07-26 14:28:23.769094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.769180] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.025 [2024-07-26 14:28:23.769197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:04.025 [2024-07-26 14:28:23.769209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:04.025 [2024-07-26 14:28:23.769223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.025 [2024-07-26 14:28:23.769322] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:04.025 [2024-07-26 14:28:23.769352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:04.025 [2024-07-26 14:28:23.769376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:04.025 [2024-07-26 14:28:23.769413] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:04.025 [2024-07-26 14:28:23.769445] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769456] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.025 [2024-07-26 14:28:23.769467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:04.025 [2024-07-26 14:28:23.769480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:04.025 [2024-07-26 14:28:23.769490] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:04.025 [2024-07-26 14:28:23.769501] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:04.025 [2024-07-26 14:28:23.769511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:04.025 [2024-07-26 14:28:23.769523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:04.025 [2024-07-26 14:28:23.769546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769556] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769568] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:04.025 [2024-07-26 14:28:23.769577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769590] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:04.025 [2024-07-26 14:28:23.769611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:04.025 [2024-07-26 14:28:23.769642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769663] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:04.025 [2024-07-26 14:28:23.769674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:04.025 [2024-07-26 14:28:23.769704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.025 [2024-07-26 14:28:23.769727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:04.025 [2024-07-26 14:28:23.769739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:04.025 [2024-07-26 14:28:23.769749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:04.025 [2024-07-26 14:28:23.769761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:04.025 [2024-07-26 14:28:23.769771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:04.025 [2024-07-26 14:28:23.769783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:04.025 [2024-07-26 14:28:23.769805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:04.025 [2024-07-26 14:28:23.769814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769825] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:04.025 [2024-07-26 14:28:23.769836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:04.025 [2024-07-26 14:28:23.769848] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:04.025 [2024-07-26 14:28:23.769871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:04.025 [2024-07-26 14:28:23.769881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:04.025 [2024-07-26 14:28:23.769906] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:04.025 [2024-07-26 14:28:23.769920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:04.025 [2024-07-26 14:28:23.769949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:04.025 [2024-07-26 14:28:23.769960] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:04.025 [2024-07-26 14:28:23.769977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:04.025 [2024-07-26 14:28:23.769993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.025 [2024-07-26 14:28:23.770008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:04.025 [2024-07-26 14:28:23.770019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:04.025 [2024-07-26 14:28:23.770032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:04.026 [2024-07-26 14:28:23.770042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:04.026 [2024-07-26 14:28:23.770055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:04.026 [2024-07-26 14:28:23.770066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:04.026 [2024-07-26 14:28:23.770079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:04.026 [2024-07-26 14:28:23.770091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:04.026 [2024-07-26 14:28:23.770103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:04.026 [2024-07-26 14:28:23.770113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:04.026 [2024-07-26 14:28:23.770128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:04.026 [2024-07-26 14:28:23.770139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:04.026 [2024-07-26 14:28:23.770152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:04.026 [2024-07-26 14:28:23.770162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:04.026 [2024-07-26 14:28:23.770175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:04.026 [2024-07-26 14:28:23.770187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:04.026 [2024-07-26 14:28:23.770201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:04.026 [2024-07-26 14:28:23.770212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:04.026 [2024-07-26 14:28:23.770225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:04.026 [2024-07-26 14:28:23.770235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:04.026 [2024-07-26 14:28:23.770249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:04.026 [2024-07-26 14:28:23.770261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:04.026 [2024-07-26 14:28:23.770289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:23:04.026 [2024-07-26 14:28:23.770300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:04.026 [2024-07-26 14:28:23.770362] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:04.026 [2024-07-26 14:28:23.770383] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:05.931 [2024-07-26 14:28:25.577489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.577576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:05.932 [2024-07-26 14:28:25.577616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1807.143 ms 00:23:05.932 [2024-07-26 14:28:25.577627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.606120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.606176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.932 [2024-07-26 14:28:25.606214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.183 ms 00:23:05.932 [2024-07-26 14:28:25.606225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.606394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.606412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:05.932 [2024-07-26 14:28:25.606430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:05.932 [2024-07-26 14:28:25.606457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.640248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.640329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.932 [2024-07-26 14:28:25.640366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.700 ms 00:23:05.932 [2024-07-26 14:28:25.640393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.640450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.640465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.932 [2024-07-26 14:28:25.640498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:05.932 [2024-07-26 14:28:25.640509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.640952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.640998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.932 [2024-07-26 14:28:25.641016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:23:05.932 [2024-07-26 14:28:25.641028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.641169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.641198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.932 [2024-07-26 14:28:25.641214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:23:05.932 [2024-07-26 14:28:25.641224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.656758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.656812] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.932 [2024-07-26 14:28:25.656846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.504 ms 00:23:05.932 [2024-07-26 14:28:25.656857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.932 [2024-07-26 14:28:25.668699] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:05.932 [2024-07-26 14:28:25.671368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.932 [2024-07-26 14:28:25.671417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:05.932 [2024-07-26 14:28:25.671449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.395 ms 00:23:05.932 [2024-07-26 14:28:25.671461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.737686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.737796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:06.192 [2024-07-26 14:28:25.737816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.186 ms 00:23:06.192 [2024-07-26 14:28:25.737828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.738099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.738146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:06.192 [2024-07-26 14:28:25.738160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:23:06.192 [2024-07-26 14:28:25.738175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.765371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.765430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:06.192 [2024-07-26 14:28:25.765463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.128 ms 00:23:06.192 [2024-07-26 14:28:25.765478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.792569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.792644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:06.192 [2024-07-26 14:28:25.792678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.046 ms 00:23:06.192 [2024-07-26 14:28:25.792690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.793492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.793556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:06.192 [2024-07-26 14:28:25.793589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:23:06.192 [2024-07-26 14:28:25.793601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.877041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.877152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:06.192 [2024-07-26 14:28:25.877173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.379 ms 00:23:06.192 [2024-07-26 14:28:25.877189] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.905488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.905567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:06.192 [2024-07-26 14:28:25.905586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.249 ms 00:23:06.192 [2024-07-26 14:28:25.905599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.192 [2024-07-26 14:28:25.936884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.192 [2024-07-26 14:28:25.936958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:06.192 [2024-07-26 14:28:25.936977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.237 ms 00:23:06.192 [2024-07-26 14:28:25.936991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.451 [2024-07-26 14:28:25.969550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.451 [2024-07-26 14:28:25.969625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:06.451 [2024-07-26 14:28:25.969642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.467 ms 00:23:06.451 [2024-07-26 14:28:25.969687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.451 [2024-07-26 14:28:25.969744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.451 [2024-07-26 14:28:25.969767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:06.451 [2024-07-26 14:28:25.969781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:06.451 [2024-07-26 14:28:25.969798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.451 [2024-07-26 14:28:25.969928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:06.451 [2024-07-26 14:28:25.969956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:06.451 [2024-07-26 14:28:25.969970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:06.451 [2024-07-26 14:28:25.969984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.451 [2024-07-26 14:28:25.971270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2214.911 ms, result 0 00:23:06.451 { 00:23:06.451 "name": "ftl0", 00:23:06.451 "uuid": "1e7712e9-1e22-48e6-a108-4ae12c2113c1" 00:23:06.451 } 00:23:06.451 14:28:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:06.451 14:28:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:06.711 14:28:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:06.711 14:28:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:06.711 14:28:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:06.970 /dev/nbd0 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:06.970 1+0 records in 00:23:06.970 1+0 records out 00:23:06.970 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422885 s, 9.7 MB/s 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:23:06.970 14:28:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:06.970 [2024-07-26 14:28:26.683532] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:06.970 [2024-07-26 14:28:26.683684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81911 ] 00:23:07.229 [2024-07-26 14:28:26.857358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.489 [2024-07-26 14:28:27.080764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.119  Copying: 190/1024 [MB] (190 MBps) Copying: 381/1024 [MB] (190 MBps) Copying: 576/1024 [MB] (194 MBps) Copying: 764/1024 [MB] (187 MBps) Copying: 948/1024 [MB] (183 MBps) Copying: 1024/1024 [MB] (average 188 MBps) 00:23:14.119 00:23:14.119 14:28:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:16.652 14:28:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:16.652 [2024-07-26 14:28:35.914042] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:16.653 [2024-07-26 14:28:35.914256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82007 ] 00:23:16.653 [2024-07-26 14:28:36.088002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.653 [2024-07-26 14:28:36.287700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.228  Copying: 12/1024 [MB] (12 MBps) Copying: 25/1024 [MB] (12 MBps) Copying: 38/1024 [MB] (12 MBps) Copying: 52/1024 [MB] (13 MBps) Copying: 65/1024 [MB] (13 MBps) Copying: 78/1024 [MB] (13 MBps) Copying: 91/1024 [MB] (12 MBps) Copying: 104/1024 [MB] (13 MBps) Copying: 119/1024 [MB] (14 MBps) Copying: 134/1024 [MB] (15 MBps) Copying: 149/1024 [MB] (14 MBps) Copying: 164/1024 [MB] (14 MBps) Copying: 178/1024 [MB] (14 MBps) Copying: 193/1024 [MB] (14 MBps) Copying: 208/1024 [MB] (15 MBps) Copying: 223/1024 [MB] (14 MBps) Copying: 238/1024 [MB] (14 MBps) Copying: 253/1024 [MB] (15 MBps) Copying: 268/1024 [MB] (15 MBps) Copying: 283/1024 [MB] (15 MBps) Copying: 298/1024 [MB] (14 MBps) Copying: 313/1024 [MB] (14 MBps) Copying: 328/1024 [MB] (14 MBps) Copying: 342/1024 [MB] (14 MBps) Copying: 357/1024 [MB] (14 MBps) Copying: 372/1024 [MB] (15 MBps) Copying: 388/1024 [MB] (15 MBps) Copying: 403/1024 [MB] (14 MBps) Copying: 418/1024 [MB] (15 MBps) Copying: 433/1024 [MB] (15 MBps) Copying: 448/1024 [MB] (15 MBps) Copying: 464/1024 [MB] (15 MBps) Copying: 478/1024 [MB] (14 MBps) Copying: 493/1024 [MB] (14 MBps) Copying: 509/1024 [MB] (15 MBps) Copying: 524/1024 [MB] (15 MBps) Copying: 539/1024 [MB] (15 MBps) Copying: 554/1024 [MB] (15 MBps) Copying: 569/1024 [MB] (14 MBps) Copying: 584/1024 [MB] (15 MBps) Copying: 599/1024 [MB] (15 MBps) Copying: 614/1024 [MB] (15 MBps) Copying: 629/1024 [MB] (14 MBps) Copying: 644/1024 [MB] (15 MBps) Copying: 659/1024 [MB] (14 MBps) Copying: 674/1024 [MB] (14 MBps) Copying: 689/1024 [MB] (14 MBps) Copying: 704/1024 [MB] (15 MBps) Copying: 719/1024 [MB] (15 MBps) Copying: 734/1024 [MB] (15 MBps) Copying: 749/1024 [MB] (14 MBps) Copying: 764/1024 [MB] (14 MBps) Copying: 779/1024 [MB] (14 MBps) Copying: 794/1024 [MB] (14 MBps) Copying: 808/1024 [MB] (14 MBps) Copying: 823/1024 [MB] (14 MBps) Copying: 838/1024 [MB] (15 MBps) Copying: 854/1024 [MB] (15 MBps) Copying: 869/1024 [MB] (14 MBps) Copying: 884/1024 [MB] (15 MBps) Copying: 899/1024 [MB] (14 MBps) Copying: 914/1024 [MB] (14 MBps) Copying: 929/1024 [MB] (14 MBps) Copying: 944/1024 [MB] (14 MBps) Copying: 959/1024 [MB] (15 MBps) Copying: 974/1024 [MB] (15 MBps) Copying: 989/1024 [MB] (14 MBps) Copying: 1004/1024 [MB] (14 MBps) Copying: 1019/1024 [MB] (15 MBps) Copying: 1024/1024 [MB] (average 14 MBps) 00:24:27.228 00:24:27.228 14:29:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:27.228 14:29:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:27.487 14:29:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:27.746 [2024-07-26 14:29:47.355120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.355191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:27.746 [2024-07-26 14:29:47.355243] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:27.746 [2024-07-26 14:29:47.355255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.355293] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:27.746 [2024-07-26 14:29:47.358475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.358528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:27.746 [2024-07-26 14:29:47.358560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.157 ms 00:24:27.746 [2024-07-26 14:29:47.358573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.360520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.360600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:27.746 [2024-07-26 14:29:47.360617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:24:27.746 [2024-07-26 14:29:47.360632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.375699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.375764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:27.746 [2024-07-26 14:29:47.375783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.042 ms 00:24:27.746 [2024-07-26 14:29:47.375797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.382159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.382229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:27.746 [2024-07-26 14:29:47.382245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.318 ms 00:24:27.746 [2024-07-26 14:29:47.382258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.412056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.412141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:27.746 [2024-07-26 14:29:47.412159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.710 ms 00:24:27.746 [2024-07-26 14:29:47.412174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.429344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.429430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:27.746 [2024-07-26 14:29:47.429449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.117 ms 00:24:27.746 [2024-07-26 14:29:47.429463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.429659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.429702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:27.746 [2024-07-26 14:29:47.429716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:27.746 [2024-07-26 14:29:47.429730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.458468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:27.746 [2024-07-26 14:29:47.458545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:27.746 [2024-07-26 14:29:47.458561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.713 ms 00:24:27.746 [2024-07-26 14:29:47.458574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:27.746 [2024-07-26 14:29:47.486539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:27.746 [2024-07-26 14:29:47.486615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:27.746 [2024-07-26 14:29:47.486632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.917 ms 00:24:27.746 [2024-07-26 14:29:47.486645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.006 [2024-07-26 14:29:47.515679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.006 [2024-07-26 14:29:47.515757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:28.006 [2024-07-26 14:29:47.515774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.988 ms 00:24:28.006 [2024-07-26 14:29:47.515787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.006 [2024-07-26 14:29:47.543250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.006 [2024-07-26 14:29:47.543349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:28.006 [2024-07-26 14:29:47.543384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.320 ms 00:24:28.006 [2024-07-26 14:29:47.543398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.006 [2024-07-26 14:29:47.543451] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:28.006 [2024-07-26 14:29:47.543494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 
[2024-07-26 14:29:47.543680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:28.006 [2024-07-26 14:29:47.543830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.543999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:24:28.007 [2024-07-26 14:29:47.544066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.544978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:28.007 [2024-07-26 14:29:47.545003] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:28.007 [2024-07-26 14:29:47.545016] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e7712e9-1e22-48e6-a108-4ae12c2113c1 00:24:28.007 [2024-07-26 14:29:47.545034] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:28.008 [2024-07-26 14:29:47.545047] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:28.008 [2024-07-26 14:29:47.545062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:28.008 [2024-07-26 14:29:47.545074] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:28.008 [2024-07-26 14:29:47.545086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:28.008 [2024-07-26 14:29:47.545098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:28.008 [2024-07-26 14:29:47.545110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:28.008 [2024-07-26 14:29:47.545120] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:28.008 [2024-07-26 14:29:47.545132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:28.008 [2024-07-26 14:29:47.545143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.008 [2024-07-26 14:29:47.545156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:28.008 [2024-07-26 14:29:47.545168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:24:28.008 [2024-07-26 
14:29:47.545181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.560625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.008 [2024-07-26 14:29:47.560688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:28.008 [2024-07-26 14:29:47.560720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.376 ms 00:24:28.008 [2024-07-26 14:29:47.560733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.561197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.008 [2024-07-26 14:29:47.561228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:28.008 [2024-07-26 14:29:47.561243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:24:28.008 [2024-07-26 14:29:47.561256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.611140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.008 [2024-07-26 14:29:47.611235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:28.008 [2024-07-26 14:29:47.611254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.008 [2024-07-26 14:29:47.611268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.611355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.008 [2024-07-26 14:29:47.611375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:28.008 [2024-07-26 14:29:47.611388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.008 [2024-07-26 14:29:47.611401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.611553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.008 [2024-07-26 14:29:47.611579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:28.008 [2024-07-26 14:29:47.611593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.008 [2024-07-26 14:29:47.611607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.611633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.008 [2024-07-26 14:29:47.611653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:28.008 [2024-07-26 14:29:47.611666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.008 [2024-07-26 14:29:47.611680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.008 [2024-07-26 14:29:47.698802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.008 [2024-07-26 14:29:47.698902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:28.008 [2024-07-26 14:29:47.698930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.008 [2024-07-26 14:29:47.698944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:28.267 [2024-07-26 14:29:47.770126] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.267 [2024-07-26 14:29:47.770283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.267 [2024-07-26 14:29:47.770461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.267 [2024-07-26 14:29:47.770646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:28.267 [2024-07-26 14:29:47.770742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:28.267 [2024-07-26 14:29:47.770834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.770945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:28.267 [2024-07-26 14:29:47.770971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:28.267 [2024-07-26 14:29:47.770985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:28.267 [2024-07-26 14:29:47.770999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.267 [2024-07-26 14:29:47.771165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.016 ms, result 0 00:24:28.267 true 00:24:28.267 14:29:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81774 00:24:28.267 14:29:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81774 00:24:28.267 14:29:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:28.267 [2024-07-26 14:29:47.901396] Starting 
SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:28.267 [2024-07-26 14:29:47.901589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82719 ] 00:24:28.526 [2024-07-26 14:29:48.071712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.526 [2024-07-26 14:29:48.235109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.063  Copying: 199/1024 [MB] (199 MBps) Copying: 396/1024 [MB] (197 MBps) Copying: 589/1024 [MB] (192 MBps) Copying: 784/1024 [MB] (195 MBps) Copying: 981/1024 [MB] (196 MBps) Copying: 1024/1024 [MB] (average 195 MBps) 00:24:35.063 00:24:35.063 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81774 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:35.063 14:29:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:35.063 [2024-07-26 14:29:54.801987] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:35.063 [2024-07-26 14:29:54.802150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82790 ] 00:24:35.325 [2024-07-26 14:29:54.970661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.583 [2024-07-26 14:29:55.141280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.841 [2024-07-26 14:29:55.421342] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:35.841 [2024-07-26 14:29:55.421424] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:35.841 [2024-07-26 14:29:55.486976] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:35.841 [2024-07-26 14:29:55.487401] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:35.841 [2024-07-26 14:29:55.487621] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:36.101 [2024-07-26 14:29:55.755611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.755677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:36.101 [2024-07-26 14:29:55.755694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.101 [2024-07-26 14:29:55.755704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.755764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.755784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:36.101 [2024-07-26 14:29:55.755796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:36.101 [2024-07-26 14:29:55.755805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.755844] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:36.101 [2024-07-26 14:29:55.756803] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:36.101 [2024-07-26 14:29:55.756830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.756842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:36.101 [2024-07-26 14:29:55.756852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:24:36.101 [2024-07-26 14:29:55.756862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.758069] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:36.101 [2024-07-26 14:29:55.773463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.773517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:36.101 [2024-07-26 14:29:55.773540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.396 ms 00:24:36.101 [2024-07-26 14:29:55.773552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.773634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.773653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:36.101 [2024-07-26 14:29:55.773665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:36.101 [2024-07-26 14:29:55.773675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.778735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.778803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:36.101 [2024-07-26 14:29:55.778817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.941 ms 00:24:36.101 [2024-07-26 14:29:55.778827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.778959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.778979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:36.101 [2024-07-26 14:29:55.778992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:36.101 [2024-07-26 14:29:55.779003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.779063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.779080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:36.101 [2024-07-26 14:29:55.779096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:36.101 [2024-07-26 14:29:55.779107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 [2024-07-26 14:29:55.779141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:36.101 [2024-07-26 14:29:55.783526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.783588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:36.101 [2024-07-26 14:29:55.783602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.395 ms 00:24:36.101 [2024-07-26 14:29:55.783613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.101 
[2024-07-26 14:29:55.783667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.101 [2024-07-26 14:29:55.783683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:36.101 [2024-07-26 14:29:55.783709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:36.101 [2024-07-26 14:29:55.783718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.102 [2024-07-26 14:29:55.783797] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:36.102 [2024-07-26 14:29:55.783826] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:36.102 [2024-07-26 14:29:55.783916] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:36.102 [2024-07-26 14:29:55.783952] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:24:36.102 [2024-07-26 14:29:55.784092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:36.102 [2024-07-26 14:29:55.784112] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:36.102 [2024-07-26 14:29:55.784126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:24:36.102 [2024-07-26 14:29:55.784141] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784155] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784173] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:36.102 [2024-07-26 14:29:55.784184] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:36.102 [2024-07-26 14:29:55.784195] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:36.102 [2024-07-26 14:29:55.784206] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:36.102 [2024-07-26 14:29:55.784218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.102 [2024-07-26 14:29:55.784229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:36.102 [2024-07-26 14:29:55.784241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:24:36.102 [2024-07-26 14:29:55.784252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.102 [2024-07-26 14:29:55.784347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.102 [2024-07-26 14:29:55.784362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:36.102 [2024-07-26 14:29:55.784380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:36.102 [2024-07-26 14:29:55.784391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.102 [2024-07-26 14:29:55.784527] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:36.102 [2024-07-26 14:29:55.784550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:36.102 [2024-07-26 14:29:55.784563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784574] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:36.102 [2024-07-26 14:29:55.784602] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:36.102 [2024-07-26 14:29:55.784632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784641] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.102 [2024-07-26 14:29:55.784650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:36.102 [2024-07-26 14:29:55.784660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:36.102 [2024-07-26 14:29:55.784669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:36.102 [2024-07-26 14:29:55.784678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:36.102 [2024-07-26 14:29:55.784688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:36.102 [2024-07-26 14:29:55.784697] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784720] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:36.102 [2024-07-26 14:29:55.784730] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:36.102 [2024-07-26 14:29:55.784759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:36.102 [2024-07-26 14:29:55.784787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784797] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:36.102 [2024-07-26 14:29:55.784816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784834] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:36.102 [2024-07-26 14:29:55.784843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784853] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:36.102 [2024-07-26 14:29:55.784864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:36.102 [2024-07-26 14:29:55.784874] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784883] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.102 [2024-07-26 14:29:55.784892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:36.102 [2024-07-26 14:29:55.784932] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:36.102 [2024-07-26 14:29:55.784942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:36.102 [2024-07-26 14:29:55.784955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:36.102 [2024-07-26 14:29:55.784965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:36.102 [2024-07-26 14:29:55.784973] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.784983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:36.102 [2024-07-26 14:29:55.784992] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:36.102 [2024-07-26 14:29:55.785001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.785010] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:36.102 [2024-07-26 14:29:55.785020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:36.102 [2024-07-26 14:29:55.785030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:36.102 [2024-07-26 14:29:55.785040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:36.102 [2024-07-26 14:29:55.785055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:36.102 [2024-07-26 14:29:55.785065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:36.102 [2024-07-26 14:29:55.785074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:36.102 [2024-07-26 14:29:55.785083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:36.102 [2024-07-26 14:29:55.785092] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:36.102 [2024-07-26 14:29:55.785102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:36.102 [2024-07-26 14:29:55.785113] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:36.102 [2024-07-26 14:29:55.785140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:36.102 [2024-07-26 14:29:55.785161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:36.102 [2024-07-26 14:29:55.785187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:36.102 [2024-07-26 14:29:55.785216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:36.102 [2024-07-26 14:29:55.785226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:36.102 [2024-07-26 14:29:55.785237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:36.102 [2024-07-26 14:29:55.785248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:36.102 [2024-07-26 
14:29:55.785260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:36.102 [2024-07-26 14:29:55.785271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:36.102 [2024-07-26 14:29:55.785282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:36.102 [2024-07-26 14:29:55.785341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:36.102 [2024-07-26 14:29:55.785353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:36.102 [2024-07-26 14:29:55.785377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:36.102 [2024-07-26 14:29:55.785389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:36.103 [2024-07-26 14:29:55.785400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:36.103 [2024-07-26 14:29:55.785413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.103 [2024-07-26 14:29:55.785424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:36.103 [2024-07-26 14:29:55.785435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:24:36.103 [2024-07-26 14:29:55.785446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.103 [2024-07-26 14:29:55.828404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.103 [2024-07-26 14:29:55.828489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:36.103 [2024-07-26 14:29:55.828539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.893 ms 00:24:36.103 [2024-07-26 14:29:55.828550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.103 [2024-07-26 14:29:55.828684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.103 [2024-07-26 14:29:55.828716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:36.103 [2024-07-26 14:29:55.828734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:36.103 [2024-07-26 14:29:55.828760] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.362 [2024-07-26 14:29:55.866912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.362 [2024-07-26 14:29:55.866983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:36.362 [2024-07-26 14:29:55.867000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.055 ms 00:24:36.362 [2024-07-26 14:29:55.867027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.362 [2024-07-26 14:29:55.867090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.362 [2024-07-26 14:29:55.867105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:36.362 [2024-07-26 14:29:55.867118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.362 [2024-07-26 14:29:55.867129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.362 [2024-07-26 14:29:55.867571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.362 [2024-07-26 14:29:55.867613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:36.362 [2024-07-26 14:29:55.867627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:24:36.362 [2024-07-26 14:29:55.867639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.362 [2024-07-26 14:29:55.867838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.362 [2024-07-26 14:29:55.867858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:36.362 [2024-07-26 14:29:55.867871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:24:36.362 [2024-07-26 14:29:55.867893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.362 [2024-07-26 14:29:55.884106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.362 [2024-07-26 14:29:55.884146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:36.362 [2024-07-26 14:29:55.884163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.186 ms 00:24:36.362 [2024-07-26 14:29:55.884175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:55.900386] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:36.363 [2024-07-26 14:29:55.900455] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:36.363 [2024-07-26 14:29:55.900503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:55.900530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:36.363 [2024-07-26 14:29:55.900542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.184 ms 00:24:36.363 [2024-07-26 14:29:55.900552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:55.927295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:55.927363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:36.363 [2024-07-26 14:29:55.927379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.701 ms 00:24:36.363 [2024-07-26 14:29:55.927389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 
14:29:55.941117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:55.941166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:36.363 [2024-07-26 14:29:55.941181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.677 ms 00:24:36.363 [2024-07-26 14:29:55.941191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:55.954425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:55.954475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:36.363 [2024-07-26 14:29:55.954488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.193 ms 00:24:36.363 [2024-07-26 14:29:55.954497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:55.955275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:55.955335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:36.363 [2024-07-26 14:29:55.955349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:24:36.363 [2024-07-26 14:29:55.955358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.018537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.018616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:36.363 [2024-07-26 14:29:56.018634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.155 ms 00:24:36.363 [2024-07-26 14:29:56.018645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.029847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:36.363 [2024-07-26 14:29:56.032175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.032209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:36.363 [2024-07-26 14:29:56.032226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.457 ms 00:24:36.363 [2024-07-26 14:29:56.032237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.032348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.032386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:36.363 [2024-07-26 14:29:56.032414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:36.363 [2024-07-26 14:29:56.032424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.032558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.032581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:36.363 [2024-07-26 14:29:56.032594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:36.363 [2024-07-26 14:29:56.032604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.032634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.032649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:36.363 [2024-07-26 14:29:56.032666] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:36.363 [2024-07-26 14:29:56.032676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.032713] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:36.363 [2024-07-26 14:29:56.032729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.032740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:36.363 [2024-07-26 14:29:56.032751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:36.363 [2024-07-26 14:29:56.032761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.059355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.059409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:36.363 [2024-07-26 14:29:56.059424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.571 ms 00:24:36.363 [2024-07-26 14:29:56.059434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.059505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.363 [2024-07-26 14:29:56.059523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:36.363 [2024-07-26 14:29:56.059535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:36.363 [2024-07-26 14:29:56.059545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.363 [2024-07-26 14:29:56.060920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 304.791 ms, result 0 00:25:21.436  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (24 MBps) Copying: 72/1024 [MB] (24 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 119/1024 [MB] (24 MBps) Copying: 144/1024 [MB] (24 MBps) Copying: 170/1024 [MB] (25 MBps) Copying: 194/1024 [MB] (24 MBps) Copying: 218/1024 [MB] (24 MBps) Copying: 242/1024 [MB] (23 MBps) Copying: 265/1024 [MB] (23 MBps) Copying: 288/1024 [MB] (22 MBps) Copying: 311/1024 [MB] (22 MBps) Copying: 334/1024 [MB] (23 MBps) Copying: 358/1024 [MB] (23 MBps) Copying: 382/1024 [MB] (23 MBps) Copying: 405/1024 [MB] (23 MBps) Copying: 428/1024 [MB] (23 MBps) Copying: 451/1024 [MB] (22 MBps) Copying: 473/1024 [MB] (22 MBps) Copying: 496/1024 [MB] (22 MBps) Copying: 519/1024 [MB] (22 MBps) Copying: 541/1024 [MB] (22 MBps) Copying: 564/1024 [MB] (22 MBps) Copying: 587/1024 [MB] (23 MBps) Copying: 610/1024 [MB] (22 MBps) Copying: 633/1024 [MB] (23 MBps) Copying: 656/1024 [MB] (22 MBps) Copying: 679/1024 [MB] (22 MBps) Copying: 701/1024 [MB] (22 MBps) Copying: 724/1024 [MB] (22 MBps) Copying: 747/1024 [MB] (22 MBps) Copying: 770/1024 [MB] (22 MBps) Copying: 793/1024 [MB] (22 MBps) Copying: 816/1024 [MB] (23 MBps) Copying: 838/1024 [MB] (22 MBps) Copying: 861/1024 [MB] (22 MBps) Copying: 884/1024 [MB] (23 MBps) Copying: 908/1024 [MB] (23 MBps) Copying: 932/1024 [MB] (23 MBps) Copying: 955/1024 [MB] (23 MBps) Copying: 979/1024 [MB] (23 MBps) Copying: 1002/1024 [MB] (23 MBps) Copying: 1023/1024 [MB] (20 MBps) Copying: 1024/1024 [MB] (average 22 MBps)[2024-07-26 14:30:40.885320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.436 [2024-07-26 14:30:40.885413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:25:21.436 [2024-07-26 14:30:40.885445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:21.436 [2024-07-26 14:30:40.885460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.436 [2024-07-26 14:30:40.887599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:21.436 [2024-07-26 14:30:40.898049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.436 [2024-07-26 14:30:40.898108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:21.436 [2024-07-26 14:30:40.898130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.384 ms 00:25:21.436 [2024-07-26 14:30:40.898144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.436 [2024-07-26 14:30:40.911426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.436 [2024-07-26 14:30:40.911496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:21.436 [2024-07-26 14:30:40.911518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.473 ms 00:25:21.436 [2024-07-26 14:30:40.911532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.436 [2024-07-26 14:30:40.933787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.436 [2024-07-26 14:30:40.933843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:21.436 [2024-07-26 14:30:40.933865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.226 ms 00:25:21.436 [2024-07-26 14:30:40.933880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.436 [2024-07-26 14:30:40.942112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.436 [2024-07-26 14:30:40.942163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:21.437 [2024-07-26 14:30:40.942201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.152 ms 00:25:21.437 [2024-07-26 14:30:40.942215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:40.980423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:40.980485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:21.437 [2024-07-26 14:30:40.980507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.119 ms 00:25:21.437 [2024-07-26 14:30:40.980520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:41.003676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:41.003735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:21.437 [2024-07-26 14:30:41.003764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.103 ms 00:25:21.437 [2024-07-26 14:30:41.003778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:41.081401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:41.081461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:21.437 [2024-07-26 14:30:41.081490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.565 ms 00:25:21.437 [2024-07-26 14:30:41.081510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:21.437 [2024-07-26 14:30:41.111157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:41.111225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:21.437 [2024-07-26 14:30:41.111259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.623 ms 00:25:21.437 [2024-07-26 14:30:41.111271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:41.142139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:41.142202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:21.437 [2024-07-26 14:30:41.142235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.818 ms 00:25:21.437 [2024-07-26 14:30:41.142246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:41.169912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:41.169966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:21.437 [2024-07-26 14:30:41.169997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.609 ms 00:25:21.437 [2024-07-26 14:30:41.170007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:41.197573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.437 [2024-07-26 14:30:41.197645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:21.437 [2024-07-26 14:30:41.197676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.470 ms 00:25:21.437 [2024-07-26 14:30:41.197687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.437 [2024-07-26 14:30:41.197746] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:21.437 [2024-07-26 14:30:41.197770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 85248 / 261120 wr_cnt: 1 state: open 00:25:21.437 [2024-07-26 14:30:41.197795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197940] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.197998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.198010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:21.437 [2024-07-26 14:30:41.198022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 
14:30:41.198236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:25:21.698 [2024-07-26 14:30:41.198559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:21.698 [2024-07-26 14:30:41.198901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.198929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.198941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.198953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.198965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:21.700 [2024-07-26 14:30:41.199079] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:21.700 [2024-07-26 14:30:41.199091] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e7712e9-1e22-48e6-a108-4ae12c2113c1 00:25:21.700 [2024-07-26 14:30:41.199108] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 85248 00:25:21.700 [2024-07-26 14:30:41.199120] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 86208 00:25:21.700 [2024-07-26 14:30:41.199133] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 85248 00:25:21.700 [2024-07-26 14:30:41.199146] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0113 00:25:21.700 [2024-07-26 14:30:41.199156] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:21.700 [2024-07-26 14:30:41.199167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:21.700 [2024-07-26 14:30:41.199178] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:21.700 [2024-07-26 14:30:41.199187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:21.700 [2024-07-26 14:30:41.199197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:21.700 [2024-07-26 14:30:41.199207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.700 [2024-07-26 14:30:41.199219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:21.700 [2024-07-26 
14:30:41.199243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:25:21.700 [2024-07-26 14:30:41.199253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.214471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.700 [2024-07-26 14:30:41.214522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:21.700 [2024-07-26 14:30:41.214553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.155 ms 00:25:21.700 [2024-07-26 14:30:41.214563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.214996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.700 [2024-07-26 14:30:41.215020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:21.700 [2024-07-26 14:30:41.215034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:25:21.700 [2024-07-26 14:30:41.215045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.247610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.247670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.700 [2024-07-26 14:30:41.247701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.247711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.247771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.247786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.700 [2024-07-26 14:30:41.247797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.247807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.247885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.247902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.700 [2024-07-26 14:30:41.247959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.247971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.247994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.248007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.700 [2024-07-26 14:30:41.248018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.248056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.341879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.341951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.700 [2024-07-26 14:30:41.341984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.341994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.415727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.415802] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.700 [2024-07-26 14:30:41.415834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.415844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.415973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.415998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.700 [2024-07-26 14:30:41.416010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.416019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.416122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.416139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.700 [2024-07-26 14:30:41.416151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.416162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.416282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.416307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.700 [2024-07-26 14:30:41.416319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.416330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.416394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.416418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:21.700 [2024-07-26 14:30:41.416431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.416456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.700 [2024-07-26 14:30:41.416499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.700 [2024-07-26 14:30:41.416514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.700 [2024-07-26 14:30:41.416531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.700 [2024-07-26 14:30:41.416541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.701 [2024-07-26 14:30:41.416616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.701 [2024-07-26 14:30:41.416638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.701 [2024-07-26 14:30:41.416650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.701 [2024-07-26 14:30:41.416669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.701 [2024-07-26 14:30:41.416808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.079 ms, result 0 00:25:23.078 00:25:23.078 00:25:23.078 14:30:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:25.010 14:30:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:25.010 [2024-07-26 14:30:44.672196] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:25.010 [2024-07-26 14:30:44.672392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83278 ] 00:25:25.269 [2024-07-26 14:30:44.836321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.528 [2024-07-26 14:30:45.067861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.787 [2024-07-26 14:30:45.335927] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.787 [2024-07-26 14:30:45.336076] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:25.787 [2024-07-26 14:30:45.495123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.787 [2024-07-26 14:30:45.495194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:25.787 [2024-07-26 14:30:45.495213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:25.787 [2024-07-26 14:30:45.495224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.787 [2024-07-26 14:30:45.495285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.787 [2024-07-26 14:30:45.495318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.787 [2024-07-26 14:30:45.495329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:25.787 [2024-07-26 14:30:45.495343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.787 [2024-07-26 14:30:45.495375] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:25.787 [2024-07-26 14:30:45.496334] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:25.788 [2024-07-26 14:30:45.496431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.496444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.788 [2024-07-26 14:30:45.496456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:25:25.788 [2024-07-26 14:30:45.496466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.497677] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:25.788 [2024-07-26 14:30:45.511770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.511824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:25.788 [2024-07-26 14:30:45.511841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.095 ms 00:25:25.788 [2024-07-26 14:30:45.511852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.511959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.511998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:25.788 [2024-07-26 14:30:45.512011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:25.788 [2024-07-26 
14:30:45.512046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.516551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.516604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.788 [2024-07-26 14:30:45.516619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.394 ms 00:25:25.788 [2024-07-26 14:30:45.516629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.516718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.516736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.788 [2024-07-26 14:30:45.516747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:25.788 [2024-07-26 14:30:45.516757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.516816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.516832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:25.788 [2024-07-26 14:30:45.516843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:25.788 [2024-07-26 14:30:45.516852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.516923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:25.788 [2024-07-26 14:30:45.520791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.520840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.788 [2024-07-26 14:30:45.520855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.892 ms 00:25:25.788 [2024-07-26 14:30:45.520869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.520920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.520936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:25.788 [2024-07-26 14:30:45.520948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:25.788 [2024-07-26 14:30:45.520958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.521000] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:25.788 [2024-07-26 14:30:45.521029] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:25.788 [2024-07-26 14:30:45.521079] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:25.788 [2024-07-26 14:30:45.521102] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:25.788 [2024-07-26 14:30:45.521243] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:25.788 [2024-07-26 14:30:45.521258] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:25.788 [2024-07-26 14:30:45.521271] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:25.788 
[2024-07-26 14:30:45.521285] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521297] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521309] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:25.788 [2024-07-26 14:30:45.521319] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:25.788 [2024-07-26 14:30:45.521329] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:25.788 [2024-07-26 14:30:45.521339] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:25.788 [2024-07-26 14:30:45.521355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.521366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:25.788 [2024-07-26 14:30:45.521377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:25:25.788 [2024-07-26 14:30:45.521387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.521492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.788 [2024-07-26 14:30:45.521508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:25.788 [2024-07-26 14:30:45.521519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:25:25.788 [2024-07-26 14:30:45.521530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.788 [2024-07-26 14:30:45.521630] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:25.788 [2024-07-26 14:30:45.521651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:25.788 [2024-07-26 14:30:45.521663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:25.788 [2024-07-26 14:30:45.521696] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:25.788 [2024-07-26 14:30:45.521740] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.788 [2024-07-26 14:30:45.521761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:25.788 [2024-07-26 14:30:45.521771] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:25.788 [2024-07-26 14:30:45.521781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:25.788 [2024-07-26 14:30:45.521791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:25.788 [2024-07-26 14:30:45.521801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:25.788 [2024-07-26 14:30:45.521811] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 
00:25:25.788 [2024-07-26 14:30:45.521830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:25.788 [2024-07-26 14:30:45.521872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521882] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:25.788 [2024-07-26 14:30:45.521902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521911] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:25.788 [2024-07-26 14:30:45.521930] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521940] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:25.788 [2024-07-26 14:30:45.521959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:25.788 [2024-07-26 14:30:45.521968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:25.788 [2024-07-26 14:30:45.521996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:25.788 [2024-07-26 14:30:45.522007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:25.788 [2024-07-26 14:30:45.522017] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.788 [2024-07-26 14:30:45.522026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:25.788 [2024-07-26 14:30:45.522036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:25.788 [2024-07-26 14:30:45.522045] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:25.788 [2024-07-26 14:30:45.522055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:25.788 [2024-07-26 14:30:45.522065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:25.788 [2024-07-26 14:30:45.522074] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.522084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:25.788 [2024-07-26 14:30:45.522094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:25.788 [2024-07-26 14:30:45.522105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.522115] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:25.788 [2024-07-26 14:30:45.522126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:25.788 [2024-07-26 14:30:45.522136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:25.788 [2024-07-26 14:30:45.522146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:25.788 [2024-07-26 14:30:45.522157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:25.788 [2024-07-26 14:30:45.522167] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:25.788 [2024-07-26 14:30:45.522176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:25.789 [2024-07-26 14:30:45.522186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:25.789 [2024-07-26 14:30:45.522195] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:25.789 [2024-07-26 14:30:45.522205] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:25.789 [2024-07-26 14:30:45.522216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:25.789 [2024-07-26 14:30:45.522229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:25.789 [2024-07-26 14:30:45.522252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:25.789 [2024-07-26 14:30:45.522263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:25.789 [2024-07-26 14:30:45.522273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:25.789 [2024-07-26 14:30:45.522284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:25.789 [2024-07-26 14:30:45.522295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:25.789 [2024-07-26 14:30:45.522305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:25.789 [2024-07-26 14:30:45.522315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:25.789 [2024-07-26 14:30:45.522326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:25.789 [2024-07-26 14:30:45.522336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:25.789 [2024-07-26 14:30:45.522389] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:25.789 [2024-07-26 14:30:45.522406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 
blk_offs:0x0 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:25.789 [2024-07-26 14:30:45.522429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:25.789 [2024-07-26 14:30:45.522439] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:25.789 [2024-07-26 14:30:45.522452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:25.789 [2024-07-26 14:30:45.522464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:25.789 [2024-07-26 14:30:45.522475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:25.789 [2024-07-26 14:30:45.522486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.896 ms 00:25:25.789 [2024-07-26 14:30:45.522496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.562242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.562316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:26.049 [2024-07-26 14:30:45.562335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.664 ms 00:25:26.049 [2024-07-26 14:30:45.562345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.562457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.562474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:26.049 [2024-07-26 14:30:45.562485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:26.049 [2024-07-26 14:30:45.562495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.595178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.595244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:26.049 [2024-07-26 14:30:45.595260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.576 ms 00:25:26.049 [2024-07-26 14:30:45.595271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.595330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.595346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:26.049 [2024-07-26 14:30:45.595357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:26.049 [2024-07-26 14:30:45.595372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.595796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.595834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:26.049 [2024-07-26 14:30:45.595849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:25:26.049 [2024-07-26 14:30:45.595859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.596081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:26.049 [2024-07-26 14:30:45.596104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:26.049 [2024-07-26 14:30:45.596117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:25:26.049 [2024-07-26 14:30:45.596133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.610388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.610441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:26.049 [2024-07-26 14:30:45.610461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.226 ms 00:25:26.049 [2024-07-26 14:30:45.610471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.624489] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:26.049 [2024-07-26 14:30:45.624542] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:26.049 [2024-07-26 14:30:45.624558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.624570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:26.049 [2024-07-26 14:30:45.624581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.974 ms 00:25:26.049 [2024-07-26 14:30:45.624591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.651111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.651167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:26.049 [2024-07-26 14:30:45.651182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.462 ms 00:25:26.049 [2024-07-26 14:30:45.651192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.664882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.664958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:26.049 [2024-07-26 14:30:45.664974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.646 ms 00:25:26.049 [2024-07-26 14:30:45.664984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.678164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.678216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:26.049 [2024-07-26 14:30:45.678231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.140 ms 00:25:26.049 [2024-07-26 14:30:45.678240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.679021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.679067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:26.049 [2024-07-26 14:30:45.679104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:25:26.049 [2024-07-26 14:30:45.679118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.753131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.753215] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:26.049 [2024-07-26 14:30:45.753243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.988 ms 00:25:26.049 [2024-07-26 14:30:45.753284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.765785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:26.049 [2024-07-26 14:30:45.768501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.768550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:26.049 [2024-07-26 14:30:45.768565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.142 ms 00:25:26.049 [2024-07-26 14:30:45.768575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.768677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.768696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:26.049 [2024-07-26 14:30:45.768708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:26.049 [2024-07-26 14:30:45.768722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.049 [2024-07-26 14:30:45.770167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.049 [2024-07-26 14:30:45.770218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:26.049 [2024-07-26 14:30:45.770234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:25:26.049 [2024-07-26 14:30:45.770246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.050 [2024-07-26 14:30:45.770314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.050 [2024-07-26 14:30:45.770358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:26.050 [2024-07-26 14:30:45.770369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:25:26.050 [2024-07-26 14:30:45.770379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.050 [2024-07-26 14:30:45.770429] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:26.050 [2024-07-26 14:30:45.770447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.050 [2024-07-26 14:30:45.770456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:26.050 [2024-07-26 14:30:45.770466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:26.050 [2024-07-26 14:30:45.770475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.050 [2024-07-26 14:30:45.800141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.050 [2024-07-26 14:30:45.800199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:26.050 [2024-07-26 14:30:45.800224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.641 ms 00:25:26.050 [2024-07-26 14:30:45.800239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.050 [2024-07-26 14:30:45.800318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.050 [2024-07-26 14:30:45.800351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:26.050 [2024-07-26 14:30:45.800378] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:26.050 [2024-07-26 14:30:45.800389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.050 [2024-07-26 14:30:45.806551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.368 ms, result 0 00:26:06.468  Copying: 1044/1048576 [kB] (1044 kBps) Copying: 2448/1048576 [kB] (1404 kBps) Copying: 9968/1048576 [kB] (7520 kBps) Copying: 36/1024 [MB] (26 MBps) Copying: 63/1024 [MB] (26 MBps) Copying: 90/1024 [MB] (27 MBps) Copying: 117/1024 [MB] (27 MBps) Copying: 144/1024 [MB] (26 MBps) Copying: 171/1024 [MB] (26 MBps) Copying: 198/1024 [MB] (26 MBps) Copying: 225/1024 [MB] (27 MBps) Copying: 253/1024 [MB] (27 MBps) Copying: 281/1024 [MB] (27 MBps) Copying: 308/1024 [MB] (27 MBps) Copying: 336/1024 [MB] (27 MBps) Copying: 363/1024 [MB] (27 MBps) Copying: 390/1024 [MB] (27 MBps) Copying: 417/1024 [MB] (27 MBps) Copying: 444/1024 [MB] (27 MBps) Copying: 472/1024 [MB] (27 MBps) Copying: 500/1024 [MB] (27 MBps) Copying: 527/1024 [MB] (27 MBps) Copying: 555/1024 [MB] (27 MBps) Copying: 583/1024 [MB] (28 MBps) Copying: 612/1024 [MB] (28 MBps) Copying: 640/1024 [MB] (28 MBps) Copying: 669/1024 [MB] (28 MBps) Copying: 696/1024 [MB] (27 MBps) Copying: 723/1024 [MB] (27 MBps) Copying: 750/1024 [MB] (27 MBps) Copying: 778/1024 [MB] (27 MBps) Copying: 805/1024 [MB] (27 MBps) Copying: 833/1024 [MB] (28 MBps) Copying: 861/1024 [MB] (27 MBps) Copying: 888/1024 [MB] (27 MBps) Copying: 915/1024 [MB] (26 MBps) Copying: 942/1024 [MB] (27 MBps) Copying: 970/1024 [MB] (27 MBps) Copying: 997/1024 [MB] (27 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-26 14:31:26.228844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.468 [2024-07-26 14:31:26.228995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:06.468 [2024-07-26 14:31:26.229022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:06.468 [2024-07-26 14:31:26.229036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.468 [2024-07-26 14:31:26.229074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:06.746 [2024-07-26 14:31:26.232557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.232591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:06.746 [2024-07-26 14:31:26.232607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.459 ms 00:26:06.746 [2024-07-26 14:31:26.232618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.232861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.232880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:06.746 [2024-07-26 14:31:26.232914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:26:06.746 [2024-07-26 14:31:26.232928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.247314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.247402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:06.746 [2024-07-26 14:31:26.247438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.360 ms 00:26:06.746 [2024-07-26 
14:31:26.247450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.254114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.254178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:06.746 [2024-07-26 14:31:26.254216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.621 ms 00:26:06.746 [2024-07-26 14:31:26.254227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.284525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.284577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:06.746 [2024-07-26 14:31:26.284609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.222 ms 00:26:06.746 [2024-07-26 14:31:26.284620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.301453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.301506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:06.746 [2024-07-26 14:31:26.301538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.790 ms 00:26:06.746 [2024-07-26 14:31:26.301549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.305282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.305341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:06.746 [2024-07-26 14:31:26.305374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.687 ms 00:26:06.746 [2024-07-26 14:31:26.305386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.333233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.333285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:06.746 [2024-07-26 14:31:26.333331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.817 ms 00:26:06.746 [2024-07-26 14:31:26.333341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.361604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.361658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:06.746 [2024-07-26 14:31:26.361688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.221 ms 00:26:06.746 [2024-07-26 14:31:26.361699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.392332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.392399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:06.746 [2024-07-26 14:31:26.392430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.576 ms 00:26:06.746 [2024-07-26 14:31:26.392469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.419681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.746 [2024-07-26 14:31:26.419733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:06.746 [2024-07-26 14:31:26.419763] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.129 ms 00:26:06.746 [2024-07-26 14:31:26.419773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.746 [2024-07-26 14:31:26.419824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:06.746 [2024-07-26 14:31:26.419847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:06.746 [2024-07-26 14:31:26.419861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:06.746 [2024-07-26 14:31:26.419872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.419989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.420040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.420054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:06.746 [2024-07-26 14:31:26.420066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 
wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420801] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.420978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421134] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:06.747 [2024-07-26 14:31:26.421178] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:06.747 [2024-07-26 14:31:26.421191] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e7712e9-1e22-48e6-a108-4ae12c2113c1 00:26:06.747 [2024-07-26 14:31:26.421209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:06.748 [2024-07-26 14:31:26.421221] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 181440 00:26:06.748 [2024-07-26 14:31:26.421231] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 179456 00:26:06.748 [2024-07-26 14:31:26.421248] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0111 00:26:06.748 [2024-07-26 14:31:26.421259] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:06.748 [2024-07-26 14:31:26.421271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:06.748 [2024-07-26 14:31:26.421282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:06.748 [2024-07-26 14:31:26.421292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:06.748 [2024-07-26 14:31:26.421302] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:06.748 [2024-07-26 14:31:26.421313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.748 [2024-07-26 14:31:26.421324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:06.748 [2024-07-26 14:31:26.421336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.493 ms 00:26:06.748 [2024-07-26 14:31:26.421347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.748 [2024-07-26 14:31:26.435585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.748 [2024-07-26 14:31:26.435641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:06.748 [2024-07-26 14:31:26.435672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.194 ms 00:26:06.748 [2024-07-26 14:31:26.435694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.748 [2024-07-26 14:31:26.436220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:06.748 [2024-07-26 14:31:26.436283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:06.748 [2024-07-26 14:31:26.436299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:26:06.748 [2024-07-26 14:31:26.436325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.748 [2024-07-26 14:31:26.467298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.748 [2024-07-26 14:31:26.467358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:06.748 [2024-07-26 14:31:26.467388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.748 [2024-07-26 14:31:26.467399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.748 [2024-07-26 14:31:26.467457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:26:06.748 [2024-07-26 14:31:26.467471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:06.748 [2024-07-26 14:31:26.467482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.748 [2024-07-26 14:31:26.467492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.748 [2024-07-26 14:31:26.467609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.748 [2024-07-26 14:31:26.467643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:06.748 [2024-07-26 14:31:26.467655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.748 [2024-07-26 14:31:26.467672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:06.748 [2024-07-26 14:31:26.467694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:06.748 [2024-07-26 14:31:26.467708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:06.748 [2024-07-26 14:31:26.467719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:06.748 [2024-07-26 14:31:26.467730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.555794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.555876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:07.018 [2024-07-26 14:31:26.555917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.555930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.626605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.626677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:07.018 [2024-07-26 14:31:26.626709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.626720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.626819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.626836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.018 [2024-07-26 14:31:26.626851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.626861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.626918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.627008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.018 [2024-07-26 14:31:26.627022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.627033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.627150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.627169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.018 [2024-07-26 14:31:26.627187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.627199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 
[2024-07-26 14:31:26.627249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.627266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:07.018 [2024-07-26 14:31:26.627279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.627302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.627384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.627417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.018 [2024-07-26 14:31:26.627429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.627447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.627512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:07.018 [2024-07-26 14:31:26.627533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.018 [2024-07-26 14:31:26.627545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:07.018 [2024-07-26 14:31:26.627556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.018 [2024-07-26 14:31:26.627696] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 398.820 ms, result 0 00:26:07.955 00:26:07.955 00:26:07.955 14:31:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:09.859 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:09.859 14:31:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:09.859 [2024-07-26 14:31:29.561584] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:26:09.859 [2024-07-26 14:31:29.561793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83720 ] 00:26:10.118 [2024-07-26 14:31:29.737541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.378 [2024-07-26 14:31:29.950479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.638 [2024-07-26 14:31:30.227700] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.638 [2024-07-26 14:31:30.227859] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:10.639 [2024-07-26 14:31:30.387691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.639 [2024-07-26 14:31:30.387771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.639 [2024-07-26 14:31:30.387822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:10.639 [2024-07-26 14:31:30.387833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.639 [2024-07-26 14:31:30.387898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.639 [2024-07-26 14:31:30.387917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.639 [2024-07-26 14:31:30.387947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:10.639 [2024-07-26 14:31:30.387964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.639 [2024-07-26 14:31:30.388000] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:10.639 [2024-07-26 14:31:30.389094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.639 [2024-07-26 14:31:30.389167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.639 [2024-07-26 14:31:30.389198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.639 [2024-07-26 14:31:30.389210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.179 ms 00:26:10.639 [2024-07-26 14:31:30.389221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.639 [2024-07-26 14:31:30.390477] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:10.905 [2024-07-26 14:31:30.406491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.406534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:10.905 [2024-07-26 14:31:30.406567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.016 ms 00:26:10.905 [2024-07-26 14:31:30.406590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.406660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.406682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:10.905 [2024-07-26 14:31:30.406694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:10.905 [2024-07-26 14:31:30.406705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.411166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:10.905 [2024-07-26 14:31:30.411221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.905 [2024-07-26 14:31:30.411252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.335 ms 00:26:10.905 [2024-07-26 14:31:30.411262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.411358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.411377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.905 [2024-07-26 14:31:30.411389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:26:10.905 [2024-07-26 14:31:30.411399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.411493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.411527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.905 [2024-07-26 14:31:30.411539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:10.905 [2024-07-26 14:31:30.411550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.411585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.905 [2024-07-26 14:31:30.415461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.415510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.905 [2024-07-26 14:31:30.415541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.885 ms 00:26:10.905 [2024-07-26 14:31:30.415551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.415601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.415618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.905 [2024-07-26 14:31:30.415629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:10.905 [2024-07-26 14:31:30.415640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.415683] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:10.905 [2024-07-26 14:31:30.415713] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:10.905 [2024-07-26 14:31:30.415769] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:10.905 [2024-07-26 14:31:30.415808] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:10.905 [2024-07-26 14:31:30.415911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.905 [2024-07-26 14:31:30.415927] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.905 [2024-07-26 14:31:30.415956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:10.905 [2024-07-26 14:31:30.415974] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.905 [2024-07-26 14:31:30.415988] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416000] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.905 [2024-07-26 14:31:30.416011] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.905 [2024-07-26 14:31:30.416049] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.905 [2024-07-26 14:31:30.416062] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.905 [2024-07-26 14:31:30.416075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.416092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.905 [2024-07-26 14:31:30.416106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:26:10.905 [2024-07-26 14:31:30.416117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.416208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.905 [2024-07-26 14:31:30.416223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.905 [2024-07-26 14:31:30.416235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:10.905 [2024-07-26 14:31:30.416250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.905 [2024-07-26 14:31:30.416373] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.905 [2024-07-26 14:31:30.416391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.905 [2024-07-26 14:31:30.416409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416421] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:10.905 [2024-07-26 14:31:30.416443] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416464] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.905 [2024-07-26 14:31:30.416474] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.905 [2024-07-26 14:31:30.416495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.905 [2024-07-26 14:31:30.416506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.905 [2024-07-26 14:31:30.416515] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.905 [2024-07-26 14:31:30.416526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.905 [2024-07-26 14:31:30.416536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.905 [2024-07-26 14:31:30.416547] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:10.905 [2024-07-26 14:31:30.416568] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416578] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416589] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.905 [2024-07-26 14:31:30.416613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.905 [2024-07-26 14:31:30.416644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.905 [2024-07-26 14:31:30.416676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.905 [2024-07-26 14:31:30.416686] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.905 [2024-07-26 14:31:30.416695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.905 [2024-07-26 14:31:30.416706] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.906 [2024-07-26 14:31:30.416716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.906 [2024-07-26 14:31:30.416727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.906 [2024-07-26 14:31:30.416737] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.906 [2024-07-26 14:31:30.416747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.906 [2024-07-26 14:31:30.416757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:10.906 [2024-07-26 14:31:30.416768] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.906 [2024-07-26 14:31:30.416779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.906 [2024-07-26 14:31:30.416789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.906 [2024-07-26 14:31:30.416799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.906 [2024-07-26 14:31:30.416809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.906 [2024-07-26 14:31:30.416820] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.906 [2024-07-26 14:31:30.416830] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.906 [2024-07-26 14:31:30.416840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.906 [2024-07-26 14:31:30.416850] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.906 [2024-07-26 14:31:30.416861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.906 [2024-07-26 14:31:30.416873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.906 [2024-07-26 14:31:30.416884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.906 [2024-07-26 14:31:30.416896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.906 [2024-07-26 14:31:30.416909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.906 [2024-07-26 14:31:30.416920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.906 
[2024-07-26 14:31:30.416947] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.906 [2024-07-26 14:31:30.416959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.906 [2024-07-26 14:31:30.416971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.906 [2024-07-26 14:31:30.416983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.906 [2024-07-26 14:31:30.416997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.906 [2024-07-26 14:31:30.417022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.906 [2024-07-26 14:31:30.417034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.906 [2024-07-26 14:31:30.417045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.906 [2024-07-26 14:31:30.417056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.906 [2024-07-26 14:31:30.417067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.906 [2024-07-26 14:31:30.417079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.906 [2024-07-26 14:31:30.417090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:10.906 [2024-07-26 14:31:30.417101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.906 [2024-07-26 14:31:30.417113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.906 [2024-07-26 14:31:30.417170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:10.906 [2024-07-26 14:31:30.417182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417200] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.906 [2024-07-26 14:31:30.417212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.906 [2024-07-26 14:31:30.417224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.906 [2024-07-26 14:31:30.417235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.906 [2024-07-26 14:31:30.417248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.417260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.906 [2024-07-26 14:31:30.417273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:26:10.906 [2024-07-26 14:31:30.417284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.453216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.453290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.906 [2024-07-26 14:31:30.453326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.845 ms 00:26:10.906 [2024-07-26 14:31:30.453337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.453455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.453471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.906 [2024-07-26 14:31:30.453483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:10.906 [2024-07-26 14:31:30.453494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.487208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.487264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.906 [2024-07-26 14:31:30.487298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.585 ms 00:26:10.906 [2024-07-26 14:31:30.487311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.487378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.487393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.906 [2024-07-26 14:31:30.487406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.906 [2024-07-26 14:31:30.487433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.487871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.487921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.906 [2024-07-26 14:31:30.487939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:26:10.906 [2024-07-26 14:31:30.487951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.488140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.488161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.906 [2024-07-26 14:31:30.488174] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:26:10.906 [2024-07-26 14:31:30.488186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.502996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.503033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.906 [2024-07-26 14:31:30.503064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.775 ms 00:26:10.906 [2024-07-26 14:31:30.503080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.518353] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:10.906 [2024-07-26 14:31:30.518408] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:10.906 [2024-07-26 14:31:30.518441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.518453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:10.906 [2024-07-26 14:31:30.518465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.184 ms 00:26:10.906 [2024-07-26 14:31:30.518476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.547704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.547778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:10.906 [2024-07-26 14:31:30.547810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.183 ms 00:26:10.906 [2024-07-26 14:31:30.547821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.562225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.562279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:10.906 [2024-07-26 14:31:30.562310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.351 ms 00:26:10.906 [2024-07-26 14:31:30.562320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.577062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.577098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:10.906 [2024-07-26 14:31:30.577127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.622 ms 00:26:10.906 [2024-07-26 14:31:30.577137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.906 [2024-07-26 14:31:30.578080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.906 [2024-07-26 14:31:30.578132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:10.906 [2024-07-26 14:31:30.578149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:26:10.906 [2024-07-26 14:31:30.578160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.907 [2024-07-26 14:31:30.641138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.907 [2024-07-26 14:31:30.641196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:10.907 [2024-07-26 14:31:30.641229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 62.932 ms 00:26:10.907 [2024-07-26 14:31:30.641246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.907 [2024-07-26 14:31:30.652631] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:10.907 [2024-07-26 14:31:30.655229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.907 [2024-07-26 14:31:30.655261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.907 [2024-07-26 14:31:30.655291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.845 ms 00:26:10.907 [2024-07-26 14:31:30.655302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.907 [2024-07-26 14:31:30.655454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.907 [2024-07-26 14:31:30.655474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:10.907 [2024-07-26 14:31:30.655488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:10.907 [2024-07-26 14:31:30.655499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.907 [2024-07-26 14:31:30.656200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.907 [2024-07-26 14:31:30.656236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.907 [2024-07-26 14:31:30.656252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:26:10.907 [2024-07-26 14:31:30.656264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.907 [2024-07-26 14:31:30.656346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.907 [2024-07-26 14:31:30.656377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:10.907 [2024-07-26 14:31:30.656389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:10.907 [2024-07-26 14:31:30.656400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.907 [2024-07-26 14:31:30.656440] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:10.907 [2024-07-26 14:31:30.656456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.907 [2024-07-26 14:31:30.656471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:10.907 [2024-07-26 14:31:30.656483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:10.907 [2024-07-26 14:31:30.656494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.165 [2024-07-26 14:31:30.684790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.165 [2024-07-26 14:31:30.684827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:11.165 [2024-07-26 14:31:30.684859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.269 ms 00:26:11.165 [2024-07-26 14:31:30.684876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:11.165 [2024-07-26 14:31:30.685050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:11.165 [2024-07-26 14:31:30.685074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:11.165 [2024-07-26 14:31:30.685087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:26:11.165 [2024-07-26 14:31:30.685098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:26:11.165 [2024-07-26 14:31:30.686327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.073 ms, result 0 00:26:55.138  Copying: 25/1024 [MB] (25 MBps) Copying: 48/1024 [MB] (23 MBps) Copying: 72/1024 [MB] (23 MBps) Copying: 96/1024 [MB] (23 MBps) Copying: 119/1024 [MB] (23 MBps) Copying: 142/1024 [MB] (23 MBps) Copying: 166/1024 [MB] (23 MBps) Copying: 190/1024 [MB] (23 MBps) Copying: 213/1024 [MB] (23 MBps) Copying: 237/1024 [MB] (23 MBps) Copying: 260/1024 [MB] (23 MBps) Copying: 283/1024 [MB] (23 MBps) Copying: 306/1024 [MB] (22 MBps) Copying: 329/1024 [MB] (23 MBps) Copying: 353/1024 [MB] (23 MBps) Copying: 377/1024 [MB] (23 MBps) Copying: 400/1024 [MB] (23 MBps) Copying: 423/1024 [MB] (22 MBps) Copying: 446/1024 [MB] (23 MBps) Copying: 470/1024 [MB] (23 MBps) Copying: 494/1024 [MB] (23 MBps) Copying: 518/1024 [MB] (23 MBps) Copying: 542/1024 [MB] (24 MBps) Copying: 565/1024 [MB] (23 MBps) Copying: 588/1024 [MB] (22 MBps) Copying: 612/1024 [MB] (23 MBps) Copying: 636/1024 [MB] (23 MBps) Copying: 658/1024 [MB] (22 MBps) Copying: 683/1024 [MB] (24 MBps) Copying: 706/1024 [MB] (23 MBps) Copying: 729/1024 [MB] (23 MBps) Copying: 753/1024 [MB] (23 MBps) Copying: 776/1024 [MB] (23 MBps) Copying: 800/1024 [MB] (23 MBps) Copying: 823/1024 [MB] (23 MBps) Copying: 845/1024 [MB] (22 MBps) Copying: 869/1024 [MB] (23 MBps) Copying: 892/1024 [MB] (23 MBps) Copying: 916/1024 [MB] (23 MBps) Copying: 939/1024 [MB] (23 MBps) Copying: 962/1024 [MB] (23 MBps) Copying: 986/1024 [MB] (23 MBps) Copying: 1009/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:32:14.720597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.720693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:55.138 [2024-07-26 14:32:14.720729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:55.138 [2024-07-26 14:32:14.720749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.720825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:55.138 [2024-07-26 14:32:14.727870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.727967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:55.138 [2024-07-26 14:32:14.728005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.996 ms 00:26:55.138 [2024-07-26 14:32:14.728059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.728597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.728680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:55.138 [2024-07-26 14:32:14.728709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:26:55.138 [2024-07-26 14:32:14.728731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.732029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.732097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:55.138 [2024-07-26 14:32:14.732112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.253 ms 00:26:55.138 [2024-07-26 14:32:14.732123] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.738134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.738185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:55.138 [2024-07-26 14:32:14.738198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.979 ms 00:26:55.138 [2024-07-26 14:32:14.738209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.768643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.768692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:55.138 [2024-07-26 14:32:14.768710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.337 ms 00:26:55.138 [2024-07-26 14:32:14.768723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.787144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.787207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:55.138 [2024-07-26 14:32:14.787231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.372 ms 00:26:55.138 [2024-07-26 14:32:14.787243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.790982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.791055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:55.138 [2024-07-26 14:32:14.791095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:26:55.138 [2024-07-26 14:32:14.791107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.820845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.820910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:55.138 [2024-07-26 14:32:14.820928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.715 ms 00:26:55.138 [2024-07-26 14:32:14.820939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.850389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.850450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:55.138 [2024-07-26 14:32:14.850467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.405 ms 00:26:55.138 [2024-07-26 14:32:14.850478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.138 [2024-07-26 14:32:14.882032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.138 [2024-07-26 14:32:14.882089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:55.138 [2024-07-26 14:32:14.882120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.509 ms 00:26:55.138 [2024-07-26 14:32:14.882131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.396 [2024-07-26 14:32:14.911319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.396 [2024-07-26 14:32:14.911377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:55.396 [2024-07-26 14:32:14.911394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.100 ms 
00:26:55.396 [2024-07-26 14:32:14.911405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.396 [2024-07-26 14:32:14.911447] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:55.396 [2024-07-26 14:32:14.911489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:55.396 [2024-07-26 14:32:14.911511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:26:55.396 [2024-07-26 14:32:14.911524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:55.396 [2024-07-26 14:32:14.911666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 
14:32:14.911878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.911975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:26:55.397 [2024-07-26 14:32:14.912505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.912976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:55.397 [2024-07-26 14:32:14.913569] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:55.398 [2024-07-26 14:32:14.913587] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1e7712e9-1e22-48e6-a108-4ae12c2113c1 00:26:55.398 [2024-07-26 14:32:14.913610] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:26:55.398 [2024-07-26 14:32:14.913628] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:55.398 [2024-07-26 14:32:14.913650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:55.398 [2024-07-26 14:32:14.913670] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:55.398 [2024-07-26 14:32:14.913690] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:55.398 [2024-07-26 14:32:14.913703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:55.398 [2024-07-26 14:32:14.913720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:55.398 [2024-07-26 14:32:14.913739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:55.398 [2024-07-26 14:32:14.913756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:55.398 [2024-07-26 14:32:14.913776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.398 [2024-07-26 14:32:14.913791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:55.398 [2024-07-26 14:32:14.913809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.329 ms 00:26:55.398 [2024-07-26 14:32:14.913821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:14.929858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.398 [2024-07-26 14:32:14.929934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:55.398 [2024-07-26 14:32:14.929983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.976 ms 00:26:55.398 [2024-07-26 14:32:14.929995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:14.930591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:55.398 [2024-07-26 14:32:14.930639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:55.398 [2024-07-26 14:32:14.930670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:26:55.398 [2024-07-26 14:32:14.930687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:14.962328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:14.962385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:55.398 [2024-07-26 14:32:14.962418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:14.962428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:14.962487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:14.962501] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:55.398 [2024-07-26 14:32:14.962511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:14.962526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:14.962599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:14.962648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:55.398 [2024-07-26 14:32:14.962666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:14.962686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:14.962722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:14.962743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:55.398 [2024-07-26 14:32:14.962762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:14.962773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.048263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.048332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:55.398 [2024-07-26 14:32:15.048383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.048394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.119482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.119560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:55.398 [2024-07-26 14:32:15.119594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.119611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.119709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.119725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:55.398 [2024-07-26 14:32:15.119737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.119746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.119789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.119803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:55.398 [2024-07-26 14:32:15.119813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.119823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.120071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.120104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:55.398 [2024-07-26 14:32:15.120123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.120135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.120237] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.120280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:55.398 [2024-07-26 14:32:15.120306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.120327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.120449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.120468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:55.398 [2024-07-26 14:32:15.120488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.120507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.120583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:55.398 [2024-07-26 14:32:15.120612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:55.398 [2024-07-26 14:32:15.120634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:55.398 [2024-07-26 14:32:15.120652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:55.398 [2024-07-26 14:32:15.120942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 400.249 ms, result 0 00:26:56.333 00:26:56.333 00:26:56.333 14:32:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:58.235 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:58.235 14:32:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:58.235 14:32:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:58.235 14:32:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:58.235 14:32:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81774 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 81774 ']' 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 81774 00:26:58.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (81774) - No such process 00:26:58.494 Process with pid 81774 is not found 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 81774 is not found' 00:26:58.494 14:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:59.062 Remove shared memory files 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:59.062 00:26:59.062 real 3m59.555s 00:26:59.062 user 4m38.821s 00:26:59.062 sys 0m35.602s 00:26:59.062 ************************************ 00:26:59.062 END TEST ftl_dirty_shutdown 00:26:59.062 ************************************ 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:59.062 14:32:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.062 14:32:18 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.062 14:32:18 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:26:59.062 14:32:18 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:59.062 14:32:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:59.062 ************************************ 00:26:59.062 START TEST ftl_upgrade_shutdown 00:26:59.062 ************************************ 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:59.063 * Looking for test storage... 00:26:59.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:59.063 
14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84263 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84263 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84263 ']' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.063 14:32:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:59.063 [2024-07-26 14:32:18.807561] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
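In short, the tcp_target_setup step traced above launches spdk_tgt pinned to core 0 and blocks until its RPC socket answers. A minimal sketch of that sequence, with paths taken from the trace and the readiness loop simplified relative to the waitforlisten helper:

  # Sketch only: start the FTL target app on core 0 and wait for its default RPC socket.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' &
  spdk_tgt_pid=$!
  # Readiness loop (simplified): poll until the RPC socket accepts commands.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done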
00:26:59.063 [2024-07-26 14:32:18.807723] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84263 ] 00:26:59.323 [2024-07-26 14:32:18.979713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.582 [2024-07-26 14:32:19.174777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:00.150 14:32:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:27:00.409 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:00.668 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:00.668 { 00:27:00.668 "name": "basen1", 00:27:00.668 "aliases": [ 00:27:00.668 "52d782da-eee4-4d7c-b21b-3bd068d84d54" 00:27:00.668 ], 00:27:00.668 "product_name": "NVMe disk", 00:27:00.668 "block_size": 4096, 00:27:00.668 "num_blocks": 1310720, 00:27:00.668 "uuid": "52d782da-eee4-4d7c-b21b-3bd068d84d54", 00:27:00.668 "assigned_rate_limits": { 00:27:00.668 "rw_ios_per_sec": 0, 00:27:00.668 "rw_mbytes_per_sec": 0, 00:27:00.668 "r_mbytes_per_sec": 0, 00:27:00.668 "w_mbytes_per_sec": 0 00:27:00.668 }, 00:27:00.668 "claimed": true, 00:27:00.668 "claim_type": "read_many_write_one", 00:27:00.668 "zoned": false, 00:27:00.668 "supported_io_types": { 00:27:00.668 "read": true, 00:27:00.668 "write": true, 00:27:00.668 "unmap": true, 00:27:00.668 "flush": true, 00:27:00.668 "reset": true, 00:27:00.668 "nvme_admin": true, 00:27:00.668 "nvme_io": true, 00:27:00.668 "nvme_io_md": false, 00:27:00.668 "write_zeroes": true, 00:27:00.668 "zcopy": false, 00:27:00.668 "get_zone_info": false, 00:27:00.668 "zone_management": false, 00:27:00.668 "zone_append": false, 00:27:00.668 "compare": true, 00:27:00.668 "compare_and_write": false, 00:27:00.668 "abort": true, 00:27:00.668 "seek_hole": false, 00:27:00.668 "seek_data": false, 00:27:00.668 "copy": true, 00:27:00.668 "nvme_iov_md": false 00:27:00.668 }, 00:27:00.668 "driver_specific": { 00:27:00.668 "nvme": [ 00:27:00.668 { 00:27:00.668 "pci_address": "0000:00:11.0", 00:27:00.668 "trid": { 00:27:00.668 "trtype": "PCIe", 00:27:00.668 "traddr": "0000:00:11.0" 00:27:00.668 }, 00:27:00.668 "ctrlr_data": { 00:27:00.668 "cntlid": 0, 00:27:00.668 "vendor_id": "0x1b36", 00:27:00.668 "model_number": "QEMU NVMe Ctrl", 00:27:00.668 "serial_number": "12341", 00:27:00.668 "firmware_revision": "8.0.0", 00:27:00.668 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:00.669 "oacs": { 00:27:00.669 "security": 0, 00:27:00.669 "format": 1, 00:27:00.669 "firmware": 0, 00:27:00.669 "ns_manage": 1 00:27:00.669 }, 00:27:00.669 "multi_ctrlr": false, 00:27:00.669 "ana_reporting": false 00:27:00.669 }, 00:27:00.669 "vs": { 00:27:00.669 "nvme_version": "1.4" 00:27:00.669 }, 00:27:00.669 "ns_data": { 00:27:00.669 "id": 1, 00:27:00.669 "can_share": false 00:27:00.669 } 00:27:00.669 } 00:27:00.669 ], 00:27:00.669 "mp_policy": "active_passive" 00:27:00.669 } 00:27:00.669 } 00:27:00.669 ]' 00:27:00.669 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:00.927 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:01.186 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5aece5a9-3cf6-4732-8124-40cb1c7df842 00:27:01.186 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:01.186 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5aece5a9-3cf6-4732-8124-40cb1c7df842 00:27:01.186 14:32:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:01.444 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=b72a5aed-b814-4c42-af09-0f12f9fed4c4 00:27:01.444 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u b72a5aed-b814-4c42-af09-0f12f9fed4c4 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=4dc44816-6369-4678-bcb9-988e3b896c23 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 4dc44816-6369-4678-bcb9-988e3b896c23 ]] 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4dc44816-6369-4678-bcb9-988e3b896c23 5120 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=4dc44816-6369-4678-bcb9-988e3b896c23 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4dc44816-6369-4678-bcb9-988e3b896c23 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4dc44816-6369-4678-bcb9-988e3b896c23 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:27:01.703 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4dc44816-6369-4678-bcb9-988e3b896c23 00:27:01.962 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:01.962 { 00:27:01.963 "name": "4dc44816-6369-4678-bcb9-988e3b896c23", 00:27:01.963 "aliases": [ 00:27:01.963 "lvs/basen1p0" 00:27:01.963 ], 00:27:01.963 "product_name": "Logical Volume", 00:27:01.963 "block_size": 4096, 00:27:01.963 "num_blocks": 5242880, 00:27:01.963 "uuid": "4dc44816-6369-4678-bcb9-988e3b896c23", 00:27:01.963 "assigned_rate_limits": { 00:27:01.963 "rw_ios_per_sec": 0, 00:27:01.963 "rw_mbytes_per_sec": 0, 00:27:01.963 "r_mbytes_per_sec": 0, 00:27:01.963 "w_mbytes_per_sec": 0 00:27:01.963 }, 00:27:01.963 "claimed": false, 00:27:01.963 "zoned": false, 00:27:01.963 "supported_io_types": { 00:27:01.963 "read": true, 00:27:01.963 "write": true, 00:27:01.963 "unmap": true, 00:27:01.963 "flush": false, 00:27:01.963 "reset": true, 00:27:01.963 "nvme_admin": false, 00:27:01.963 "nvme_io": false, 00:27:01.963 "nvme_io_md": false, 00:27:01.963 "write_zeroes": true, 00:27:01.963 
"zcopy": false, 00:27:01.963 "get_zone_info": false, 00:27:01.963 "zone_management": false, 00:27:01.963 "zone_append": false, 00:27:01.963 "compare": false, 00:27:01.963 "compare_and_write": false, 00:27:01.963 "abort": false, 00:27:01.963 "seek_hole": true, 00:27:01.963 "seek_data": true, 00:27:01.963 "copy": false, 00:27:01.963 "nvme_iov_md": false 00:27:01.963 }, 00:27:01.963 "driver_specific": { 00:27:01.963 "lvol": { 00:27:01.963 "lvol_store_uuid": "b72a5aed-b814-4c42-af09-0f12f9fed4c4", 00:27:01.963 "base_bdev": "basen1", 00:27:01.963 "thin_provision": true, 00:27:01.963 "num_allocated_clusters": 0, 00:27:01.963 "snapshot": false, 00:27:01.963 "clone": false, 00:27:01.963 "esnap_clone": false 00:27:01.963 } 00:27:01.963 } 00:27:01.963 } 00:27:01.963 ]' 00:27:01.963 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:01.963 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:27:01.963 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:02.221 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:27:02.221 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:27:02.221 14:32:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:27:02.221 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:02.221 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:02.221 14:32:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:02.480 14:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:02.480 14:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:02.480 14:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:02.738 14:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:02.738 14:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:02.739 14:32:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4dc44816-6369-4678-bcb9-988e3b896c23 -c cachen1p0 --l2p_dram_limit 2 00:27:02.998 [2024-07-26 14:32:22.521300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.998 [2024-07-26 14:32:22.521376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:02.998 [2024-07-26 14:32:22.521412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:02.998 [2024-07-26 14:32:22.521426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.998 [2024-07-26 14:32:22.521518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.521557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:02.999 [2024-07-26 14:32:22.521571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:27:02.999 [2024-07-26 14:32:22.521584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.521613] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:02.999 [2024-07-26 14:32:22.522603] 
mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:02.999 [2024-07-26 14:32:22.522654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.522674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:02.999 [2024-07-26 14:32:22.522687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.048 ms 00:27:02.999 [2024-07-26 14:32:22.522700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.522843] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 9de8945d-d487-4658-a5e5-cd03fde449c9 00:27:02.999 [2024-07-26 14:32:22.523977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.524008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:02.999 [2024-07-26 14:32:22.524055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:02.999 [2024-07-26 14:32:22.524070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.528701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.528746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:02.999 [2024-07-26 14:32:22.528781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.543 ms 00:27:02.999 [2024-07-26 14:32:22.528791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.528854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.528870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:02.999 [2024-07-26 14:32:22.528884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:02.999 [2024-07-26 14:32:22.528895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.528998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.529016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:02.999 [2024-07-26 14:32:22.529033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:02.999 [2024-07-26 14:32:22.529044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.529094] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:02.999 [2024-07-26 14:32:22.533584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.533625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:02.999 [2024-07-26 14:32:22.533658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.518 ms 00:27:02.999 [2024-07-26 14:32:22.533671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.533707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.533725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:02.999 [2024-07-26 14:32:22.533738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:02.999 [2024-07-26 14:32:22.533750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
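Condensed, the RPC sequence traced above assembles the FTL device stack: the NVMe namespace at 0000:00:11.0 becomes the base device (wrapped in a 20 GiB thin-provisioned lvol), and a 5 GiB split of 0000:00:10.0 becomes the NV cache. A sketch of that sequence, with the UUIDs being whatever the lvstore/lvol RPCs return:

  SPDK=/home/vagrant/spdk_repo/spdk; RPC="$SPDK/scripts/rpc.py"
  $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0      # exposes basen1
  lvs_uuid=$($RPC bdev_lvol_create_lvstore basen1 lvs)                  # leftover lvstores are deleted first in the real script
  lvol_uuid=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid")   # 20 GiB thin-provisioned lvol
  $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0     # exposes cachen1
  $RPC bdev_split_create cachen1 -s 5120 1                              # one 5 GiB split: cachen1p0
  $RPC -t 60 bdev_ftl_create -b ftl -d "$lvol_uuid" -c cachen1p0 --l2p_dram_limit 2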
00:27:02.999 [2024-07-26 14:32:22.533807] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:02.999 [2024-07-26 14:32:22.534056] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:02.999 [2024-07-26 14:32:22.534082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:02.999 [2024-07-26 14:32:22.534103] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:02.999 [2024-07-26 14:32:22.534118] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534133] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534145] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:02.999 [2024-07-26 14:32:22.534162] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:02.999 [2024-07-26 14:32:22.534172] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:02.999 [2024-07-26 14:32:22.534184] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:02.999 [2024-07-26 14:32:22.534197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.534209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:02.999 [2024-07-26 14:32:22.534220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:27:02.999 [2024-07-26 14:32:22.534233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.534337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:02.999 [2024-07-26 14:32:22.534353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:02.999 [2024-07-26 14:32:22.534364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:02.999 [2024-07-26 14:32:22.534378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:02.999 [2024-07-26 14:32:22.534490] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:02.999 [2024-07-26 14:32:22.534513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:02.999 [2024-07-26 14:32:22.534526] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534538] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534549] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:02.999 [2024-07-26 14:32:22.534578] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:02.999 [2024-07-26 14:32:22.534613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:02.999 [2024-07-26 14:32:22.534623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:02.999 [2024-07-26 14:32:22.534637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:02.999 [2024-07-26 14:32:22.534659] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 
14.75 MiB 00:27:02.999 [2024-07-26 14:32:22.534669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:02.999 [2024-07-26 14:32:22.534691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:02.999 [2024-07-26 14:32:22.534702] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:02.999 [2024-07-26 14:32:22.534726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:02.999 [2024-07-26 14:32:22.534735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534748] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:02.999 [2024-07-26 14:32:22.534758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:02.999 [2024-07-26 14:32:22.534769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:02.999 [2024-07-26 14:32:22.534791] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:02.999 [2024-07-26 14:32:22.534801] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:02.999 [2024-07-26 14:32:22.534822] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:02.999 [2024-07-26 14:32:22.534833] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:02.999 [2024-07-26 14:32:22.534855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:02.999 [2024-07-26 14:32:22.534864] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:02.999 [2024-07-26 14:32:22.534876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:02.999 [2024-07-26 14:32:22.534886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:02.999 [2024-07-26 14:32:22.534899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.534909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:02.999 [2024-07-26 14:32:22.534922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:02.999 [2024-07-26 14:32:22.535215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.535268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:02.999 [2024-07-26 14:32:22.535308] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:02.999 [2024-07-26 14:32:22.535348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.535475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:02.999 [2024-07-26 14:32:22.535523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:02.999 [2024-07-26 14:32:22.535586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.535753] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:02.999 [2024-07-26 14:32:22.535890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:02.999 [2024-07-26 14:32:22.536048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:02.999 [2024-07-26 14:32:22.536176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:02.999 [2024-07-26 14:32:22.536234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:02.999 [2024-07-26 14:32:22.536303] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:02.999 [2024-07-26 14:32:22.536435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:02.999 [2024-07-26 14:32:22.536565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:02.999 [2024-07-26 14:32:22.536674] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:03.000 [2024-07-26 14:32:22.536782] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:03.000 [2024-07-26 14:32:22.536840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:03.000 [2024-07-26 14:32:22.536993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:03.000 [2024-07-26 14:32:22.537032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:03.000 [2024-07-26 14:32:22.537072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:03.000 [2024-07-26 14:32:22.537084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:03.000 [2024-07-26 14:32:22.537097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:03.000 [2024-07-26 14:32:22.537108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 
blk_offs:0x2f80 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:03.000 [2024-07-26 14:32:22.537201] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:03.000 [2024-07-26 14:32:22.537214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:03.000 [2024-07-26 14:32:22.537254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:03.000 [2024-07-26 14:32:22.537282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:03.000 [2024-07-26 14:32:22.537293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:03.000 [2024-07-26 14:32:22.537307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:03.000 [2024-07-26 14:32:22.537319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:03.000 [2024-07-26 14:32:22.537333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.873 ms 00:27:03.000 [2024-07-26 14:32:22.537343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:03.000 [2024-07-26 14:32:22.537399] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
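The layout dump is self-consistent with the figures reported a few lines earlier: 3774873 L2P entries at 4 bytes each come to roughly 14.4 MiB of raw map, which lands in the 14.50 MiB l2p region once rounded up to block granularity (the small remainder is presumably alignment padding). A quick check of that arithmetic:

  # 3774873 entries * 4 bytes/entry, expressed in MiB (compare with the 14.50 MiB l2p region above).
  awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / 1048576 }'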
00:27:03.000 [2024-07-26 14:32:22.537416] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:06.284 [2024-07-26 14:32:25.646231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.646295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:06.284 [2024-07-26 14:32:25.646335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3108.843 ms 00:27:06.284 [2024-07-26 14:32:25.646347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.674549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.674619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:06.284 [2024-07-26 14:32:25.674656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.972 ms 00:27:06.284 [2024-07-26 14:32:25.674667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.674785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.674804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:06.284 [2024-07-26 14:32:25.674822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:06.284 [2024-07-26 14:32:25.674832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.710038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.710098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:06.284 [2024-07-26 14:32:25.710120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.153 ms 00:27:06.284 [2024-07-26 14:32:25.710131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.710189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.710203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:06.284 [2024-07-26 14:32:25.710220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:06.284 [2024-07-26 14:32:25.710229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.710602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.710619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:06.284 [2024-07-26 14:32:25.710633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.296 ms 00:27:06.284 [2024-07-26 14:32:25.710643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.710700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.710718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:06.284 [2024-07-26 14:32:25.710732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:06.284 [2024-07-26 14:32:25.710742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.726015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.726055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:06.284 [2024-07-26 14:32:25.726089] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.247 ms 00:27:06.284 [2024-07-26 14:32:25.726100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.739442] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:06.284 [2024-07-26 14:32:25.740519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.740563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:06.284 [2024-07-26 14:32:25.740581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.306 ms 00:27:06.284 [2024-07-26 14:32:25.740596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.781562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.781640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:06.284 [2024-07-26 14:32:25.781661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.929 ms 00:27:06.284 [2024-07-26 14:32:25.781675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.781803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.781829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:06.284 [2024-07-26 14:32:25.781842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:27:06.284 [2024-07-26 14:32:25.781856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.808498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.808558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:06.284 [2024-07-26 14:32:25.808575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.528 ms 00:27:06.284 [2024-07-26 14:32:25.808591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.835290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.835348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:06.284 [2024-07-26 14:32:25.835364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.656 ms 00:27:06.284 [2024-07-26 14:32:25.835382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.836087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.284 [2024-07-26 14:32:25.836116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:06.284 [2024-07-26 14:32:25.836133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.661 ms 00:27:06.284 [2024-07-26 14:32:25.836146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.284 [2024-07-26 14:32:25.938273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.285 [2024-07-26 14:32:25.938354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:06.285 [2024-07-26 14:32:25.938374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 102.043 ms 00:27:06.285 [2024-07-26 14:32:25.938402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.285 [2024-07-26 14:32:25.965979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:06.285 [2024-07-26 14:32:25.966037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:06.285 [2024-07-26 14:32:25.966055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.527 ms 00:27:06.285 [2024-07-26 14:32:25.966067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.285 [2024-07-26 14:32:25.994845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.285 [2024-07-26 14:32:25.994976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:06.285 [2024-07-26 14:32:25.995020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.734 ms 00:27:06.285 [2024-07-26 14:32:25.995033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.285 [2024-07-26 14:32:26.022790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.285 [2024-07-26 14:32:26.022849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:06.285 [2024-07-26 14:32:26.022866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.713 ms 00:27:06.285 [2024-07-26 14:32:26.022879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.285 [2024-07-26 14:32:26.022964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.285 [2024-07-26 14:32:26.022985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:06.285 [2024-07-26 14:32:26.022998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:06.285 [2024-07-26 14:32:26.023012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.285 [2024-07-26 14:32:26.023112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.285 [2024-07-26 14:32:26.023135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:06.285 [2024-07-26 14:32:26.023147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:06.285 [2024-07-26 14:32:26.023158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.285 [2024-07-26 14:32:26.024272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3502.368 ms, result 0 00:27:06.285 { 00:27:06.285 "name": "ftl", 00:27:06.285 "uuid": "9de8945d-d487-4658-a5e5-cd03fde449c9" 00:27:06.285 } 00:27:06.543 14:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:06.801 [2024-07-26 14:32:26.311532] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.801 14:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:07.060 14:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:07.319 [2024-07-26 14:32:26.868137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:07.319 14:32:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:07.578 [2024-07-26 14:32:27.129272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:07.578 14:32:27 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:07.837 Fill FTL, iteration 1 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:07.837 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84383 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84383 /var/tmp/spdk.tgt.sock 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84383 ']' 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:07.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:07.838 14:32:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:07.838 [2024-07-26 14:32:27.574225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
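The fill parameters set above work out to 1024 blocks of 1 MiB per pass at queue depth 2, i.e. 1 GiB per iteration, with seek/skip advancing by that amount between the two iterations. The same arithmetic in shell form:

  size=1073741824; bs=1048576; qd=2; iterations=2
  count=$(( size / bs ))                                   # 1024 one-MiB blocks per pass
  echo "writes $(( count * iterations )) MiB total, $count MiB per iteration, qd=$qd"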
00:27:07.838 [2024-07-26 14:32:27.574579] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84383 ] 00:27:08.097 [2024-07-26 14:32:27.733058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.356 [2024-07-26 14:32:27.961972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.923 14:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:08.923 14:32:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:08.923 14:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:09.181 ftln1 00:27:09.181 14:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:09.182 14:32:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84383 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84383 ']' 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84383 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.440 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84383 00:27:09.708 killing process with pid 84383 00:27:09.708 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:09.708 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:09.708 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84383' 00:27:09.708 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84383 00:27:09.708 14:32:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84383 00:27:11.625 14:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:11.625 14:32:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:11.625 [2024-07-26 14:32:31.146401] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
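tcp_dd is the indirection that lets spdk_dd drive the FTL bdev over NVMe/TCP: the target exports ftl through subsystem nqn.2018-09.io.spdk:cnode0 on 127.0.0.1:4420, a throwaway initiator app on core 1 attaches to it as ftln1 and dumps its bdev config to ini.json, and spdk_dd then replays that JSON. A sketch condensed from the traced helpers (tcp_initiator_setup, waitforlisten and error handling omitted):

  SPDK=/home/vagrant/spdk_repo/spdk; RPC="$SPDK/scripts/rpc.py"
  # Target side: export the ftl bdev over NVMe/TCP.
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  # Initiator side: a helper app on core 1 attaches the namespace as ftln1 and its
  # bdev config is captured to ini.json (readiness wait omitted here).
  "$SPDK/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  INI_RPC="$RPC -s /var/tmp/spdk.tgt.sock"
  $INI_RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  { echo '{"subsystems": ['; $INI_RPC save_subsystem_config -n bdev; echo ']}'; } > "$SPDK/test/ftl/config/ini.json"
  kill "$!"                                                # the helper app is only needed to produce ini.json
  # spdk_dd loads ini.json itself, so it can write straight to ftln1 over TCP.
  "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$SPDK/test/ftl/config/ini.json" --if=/dev/urandom --ob=ftln1 \
      --bs=1048576 --count=1024 --qd=2 --seek=0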
00:27:11.625 [2024-07-26 14:32:31.147117] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84438 ] 00:27:11.625 [2024-07-26 14:32:31.315479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.884 [2024-07-26 14:32:31.478579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.012  Copying: 210/1024 [MB] (210 MBps) Copying: 422/1024 [MB] (212 MBps) Copying: 633/1024 [MB] (211 MBps) Copying: 842/1024 [MB] (209 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:27:18.012 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:18.012 Calculate MD5 checksum, iteration 1 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:18.012 14:32:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.271 [2024-07-26 14:32:37.855230] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:18.271 [2024-07-26 14:32:37.855377] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84508 ] 00:27:18.271 [2024-07-26 14:32:38.015567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.529 [2024-07-26 14:32:38.183064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.060  Copying: 480/1024 [MB] (480 MBps) Copying: 968/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 483 MBps) 00:27:22.060 00:27:22.060 14:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:22.060 14:32:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:23.961 Fill FTL, iteration 2 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9e2b2ca58c6d2bb9d6ad4b261452d6a4 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:23.961 14:32:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:23.961 [2024-07-26 14:32:43.658721] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
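After each fill pass, the same 1 GiB region is read back through the FTL bdev and hashed, and the per-iteration digest is stored in sums[], presumably for comparison after the shutdown/upgrade exercised later in the test. A sketch of iteration 1, mirroring the traced commands:

  SPDK=/home/vagrant/spdk_repo/spdk; FILE=$SPDK/test/ftl/file
  # Read back the region just written (skip=0 for iteration 1, 1024 for iteration 2) and hash it.
  "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$SPDK/test/ftl/config/ini.json" --ib=ftln1 --of="$FILE" \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[0]=$(md5sum "$FILE" | cut -f1 -d ' ')               # 9e2b2ca58c6d2bb9d6ad4b261452d6a4 in this run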
00:27:23.961 [2024-07-26 14:32:43.658890] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84565 ] 00:27:24.219 [2024-07-26 14:32:43.820307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.478 [2024-07-26 14:32:44.033212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.554  Copying: 213/1024 [MB] (213 MBps) Copying: 422/1024 [MB] (209 MBps) Copying: 633/1024 [MB] (211 MBps) Copying: 845/1024 [MB] (212 MBps) Copying: 1024/1024 [MB] (average 210 MBps) 00:27:30.554 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:30.816 Calculate MD5 checksum, iteration 2 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:30.816 14:32:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:30.816 [2024-07-26 14:32:50.424167] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:30.816 [2024-07-26 14:32:50.424326] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84635 ] 00:27:31.075 [2024-07-26 14:32:50.597878] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.075 [2024-07-26 14:32:50.779147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.287  Copying: 478/1024 [MB] (478 MBps) Copying: 977/1024 [MB] (499 MBps) Copying: 1024/1024 [MB] (average 487 MBps) 00:27:35.287 00:27:35.287 14:32:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:35.287 14:32:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:37.189 14:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:37.189 14:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7da4642d7d3d93aa061faa6b40ff408e 00:27:37.189 14:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:37.189 14:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:37.189 14:32:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:37.448 [2024-07-26 14:32:57.077032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.448 [2024-07-26 14:32:57.077083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:37.448 [2024-07-26 14:32:57.077112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:37.448 [2024-07-26 14:32:57.077137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.448 [2024-07-26 14:32:57.077188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.448 [2024-07-26 14:32:57.077209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:37.448 [2024-07-26 14:32:57.077227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:37.448 [2024-07-26 14:32:57.077243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.448 [2024-07-26 14:32:57.077299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:37.448 [2024-07-26 14:32:57.077320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:37.448 [2024-07-26 14:32:57.077338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:37.448 [2024-07-26 14:32:57.077354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:37.448 [2024-07-26 14:32:57.077453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.420 ms, result 0 00:27:37.448 true 00:27:37.448 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:37.720 { 00:27:37.720 "name": "ftl", 00:27:37.720 "properties": [ 00:27:37.720 { 00:27:37.720 "name": "superblock_version", 00:27:37.720 "value": 5, 00:27:37.720 "read-only": true 00:27:37.720 }, 00:27:37.720 { 00:27:37.720 "name": "base_device", 00:27:37.720 "bands": [ 00:27:37.720 { 00:27:37.720 "id": 0, 00:27:37.720 "state": "FREE", 00:27:37.720 "validity": 0.0 00:27:37.720 }, 
00:27:37.720 { 00:27:37.720 "id": 1, 00:27:37.720 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 2, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 3, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 4, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 5, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 6, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 7, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 8, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 9, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 10, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 11, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 12, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 13, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 14, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 15, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 16, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 17, 00:27:37.721 "state": "FREE", 00:27:37.721 "validity": 0.0 00:27:37.721 } 00:27:37.721 ], 00:27:37.721 "read-only": true 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "name": "cache_device", 00:27:37.721 "type": "bdev", 00:27:37.721 "chunks": [ 00:27:37.721 { 00:27:37.721 "id": 0, 00:27:37.721 "state": "INACTIVE", 00:27:37.721 "utilization": 0.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 1, 00:27:37.721 "state": "CLOSED", 00:27:37.721 "utilization": 1.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 2, 00:27:37.721 "state": "CLOSED", 00:27:37.721 "utilization": 1.0 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 3, 00:27:37.721 "state": "OPEN", 00:27:37.721 "utilization": 0.001953125 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "id": 4, 00:27:37.721 "state": "OPEN", 00:27:37.721 "utilization": 0.0 00:27:37.721 } 00:27:37.721 ], 00:27:37.721 "read-only": true 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "name": "verbose_mode", 00:27:37.721 "value": true, 00:27:37.721 "unit": "", 00:27:37.721 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:37.721 }, 00:27:37.721 { 00:27:37.721 "name": "prep_upgrade_on_shutdown", 00:27:37.721 "value": false, 00:27:37.721 "unit": "", 00:27:37.721 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:37.721 } 00:27:37.721 ] 00:27:37.721 } 00:27:37.721 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:38.006 [2024-07-26 14:32:57.660871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.006 [2024-07-26 
14:32:57.660962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:38.006 [2024-07-26 14:32:57.660992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:38.006 [2024-07-26 14:32:57.661008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.006 [2024-07-26 14:32:57.661070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.006 [2024-07-26 14:32:57.661091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:38.006 [2024-07-26 14:32:57.661107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:38.006 [2024-07-26 14:32:57.661123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.006 [2024-07-26 14:32:57.661195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.006 [2024-07-26 14:32:57.661218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:38.006 [2024-07-26 14:32:57.661236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:38.006 [2024-07-26 14:32:57.661253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.006 [2024-07-26 14:32:57.661419] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.450 ms, result 0 00:27:38.006 true 00:27:38.006 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:38.006 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.006 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:38.265 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:38.265 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:38.265 14:32:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:38.523 [2024-07-26 14:32:58.148129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.523 [2024-07-26 14:32:58.148183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:38.523 [2024-07-26 14:32:58.148214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:38.523 [2024-07-26 14:32:58.148232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.523 [2024-07-26 14:32:58.148279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.524 [2024-07-26 14:32:58.148303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:38.524 [2024-07-26 14:32:58.148335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:38.524 [2024-07-26 14:32:58.148379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:38.524 [2024-07-26 14:32:58.148419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:38.524 [2024-07-26 14:32:58.148454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:38.524 [2024-07-26 14:32:58.148472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:38.524 [2024-07-26 14:32:58.148487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:27:38.524 [2024-07-26 14:32:58.148623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.448 ms, result 0 00:27:38.524 true 00:27:38.524 14:32:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:38.783 { 00:27:38.783 "name": "ftl", 00:27:38.783 "properties": [ 00:27:38.783 { 00:27:38.783 "name": "superblock_version", 00:27:38.783 "value": 5, 00:27:38.783 "read-only": true 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "name": "base_device", 00:27:38.783 "bands": [ 00:27:38.783 { 00:27:38.783 "id": 0, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 1, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 2, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 3, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 4, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 5, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 6, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 7, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 8, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 9, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 10, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 11, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 12, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 13, 00:27:38.783 "state": "FREE", 00:27:38.783 "validity": 0.0 00:27:38.783 }, 00:27:38.783 { 00:27:38.783 "id": 14, 00:27:38.784 "state": "FREE", 00:27:38.784 "validity": 0.0 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 15, 00:27:38.784 "state": "FREE", 00:27:38.784 "validity": 0.0 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 16, 00:27:38.784 "state": "FREE", 00:27:38.784 "validity": 0.0 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 17, 00:27:38.784 "state": "FREE", 00:27:38.784 "validity": 0.0 00:27:38.784 } 00:27:38.784 ], 00:27:38.784 "read-only": true 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "name": "cache_device", 00:27:38.784 "type": "bdev", 00:27:38.784 "chunks": [ 00:27:38.784 { 00:27:38.784 "id": 0, 00:27:38.784 "state": "INACTIVE", 00:27:38.784 "utilization": 0.0 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 1, 00:27:38.784 "state": "CLOSED", 00:27:38.784 "utilization": 1.0 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 2, 00:27:38.784 "state": "CLOSED", 00:27:38.784 "utilization": 1.0 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 3, 00:27:38.784 "state": "OPEN", 00:27:38.784 "utilization": 0.001953125 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "id": 4, 00:27:38.784 "state": "OPEN", 00:27:38.784 "utilization": 0.0 00:27:38.784 } 00:27:38.784 ], 00:27:38.784 "read-only": true 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "name": "verbose_mode", 00:27:38.784 "value": 
true, 00:27:38.784 "unit": "", 00:27:38.784 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:38.784 }, 00:27:38.784 { 00:27:38.784 "name": "prep_upgrade_on_shutdown", 00:27:38.784 "value": true, 00:27:38.784 "unit": "", 00:27:38.784 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:38.784 } 00:27:38.784 ] 00:27:38.784 } 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84263 ]] 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84263 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84263 ']' 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84263 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84263 00:27:38.784 killing process with pid 84263 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84263' 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84263 00:27:38.784 14:32:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84263 00:27:39.721 [2024-07-26 14:32:59.287560] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:39.721 [2024-07-26 14:32:59.305443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.721 [2024-07-26 14:32:59.305494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:39.721 [2024-07-26 14:32:59.305525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:39.721 [2024-07-26 14:32:59.305546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.721 [2024-07-26 14:32:59.305590] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:39.721 [2024-07-26 14:32:59.308930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.721 [2024-07-26 14:32:59.308998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:39.721 [2024-07-26 14:32:59.309033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.310 ms 00:27:39.721 [2024-07-26 14:32:59.309051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.888507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.888572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:49.699 [2024-07-26 14:33:07.888609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8579.453 ms 00:27:49.699 [2024-07-26 14:33:07.888627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.890051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:49.699 [2024-07-26 14:33:07.890108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:49.699 [2024-07-26 14:33:07.890136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.392 ms 00:27:49.699 [2024-07-26 14:33:07.890158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.891457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.891500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:49.699 [2024-07-26 14:33:07.891533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.212 ms 00:27:49.699 [2024-07-26 14:33:07.891551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.902799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.902854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:49.699 [2024-07-26 14:33:07.902881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.167 ms 00:27:49.699 [2024-07-26 14:33:07.902934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.910670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.910721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:49.699 [2024-07-26 14:33:07.910747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.677 ms 00:27:49.699 [2024-07-26 14:33:07.910766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.910953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.910991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:49.699 [2024-07-26 14:33:07.911012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.130 ms 00:27:49.699 [2024-07-26 14:33:07.911046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.922602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.922644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:49.699 [2024-07-26 14:33:07.922684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.524 ms 00:27:49.699 [2024-07-26 14:33:07.922704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.934755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.934820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:49.699 [2024-07-26 14:33:07.934847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.983 ms 00:27:49.699 [2024-07-26 14:33:07.934865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.947924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.948177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:49.699 [2024-07-26 14:33:07.948223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.981 ms 00:27:49.699 [2024-07-26 14:33:07.948248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.961051] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.961093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:49.699 [2024-07-26 14:33:07.961118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.651 ms 00:27:49.699 [2024-07-26 14:33:07.961145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.961202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:49.699 [2024-07-26 14:33:07.961241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.699 [2024-07-26 14:33:07.961265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:49.699 [2024-07-26 14:33:07.961285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:49.699 [2024-07-26 14:33:07.961305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:49.699 [2024-07-26 14:33:07.961626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:49.699 [2024-07-26 14:33:07.961645] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9de8945d-d487-4658-a5e5-cd03fde449c9 00:27:49.699 [2024-07-26 14:33:07.961664] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:49.699 [2024-07-26 14:33:07.961681] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 
00:27:49.699 [2024-07-26 14:33:07.961699] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:49.699 [2024-07-26 14:33:07.961717] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:49.699 [2024-07-26 14:33:07.961744] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:49.699 [2024-07-26 14:33:07.961790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:49.699 [2024-07-26 14:33:07.961807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:49.699 [2024-07-26 14:33:07.961823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:49.699 [2024-07-26 14:33:07.961840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:49.699 [2024-07-26 14:33:07.961857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.961874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:49.699 [2024-07-26 14:33:07.961893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.658 ms 00:27:49.699 [2024-07-26 14:33:07.961926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.978351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.978391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:49.699 [2024-07-26 14:33:07.978436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.328 ms 00:27:49.699 [2024-07-26 14:33:07.978454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:07.979039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.699 [2024-07-26 14:33:07.979079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:49.699 [2024-07-26 14:33:07.979105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.526 ms 00:27:49.699 [2024-07-26 14:33:07.979125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:08.030745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.699 [2024-07-26 14:33:08.030834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:49.699 [2024-07-26 14:33:08.030860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.699 [2024-07-26 14:33:08.030877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.699 [2024-07-26 14:33:08.030999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.699 [2024-07-26 14:33:08.031040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:49.700 [2024-07-26 14:33:08.031060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.031076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.031277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.031320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:49.700 [2024-07-26 14:33:08.031375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.031409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.031449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.031483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:49.700 [2024-07-26 14:33:08.031506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.031525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.118789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.118861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:49.700 [2024-07-26 14:33:08.118888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.118939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.194446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.194508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:49.700 [2024-07-26 14:33:08.194535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.194552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.194706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.194732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:49.700 [2024-07-26 14:33:08.194751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.194776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.194858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.194883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:49.700 [2024-07-26 14:33:08.194964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.194985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.195143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.195169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:49.700 [2024-07-26 14:33:08.195189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.195206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.195324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.195363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:49.700 [2024-07-26 14:33:08.195382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.195401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.195466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.195498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:49.700 [2024-07-26 14:33:08.195519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.195536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.195623] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:49.700 [2024-07-26 14:33:08.195661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:49.700 [2024-07-26 14:33:08.195681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:49.700 [2024-07-26 14:33:08.195698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.700 [2024-07-26 14:33:08.195934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8890.478 ms, result 0 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84845 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84845 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84845 ']' 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:51.601 14:33:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.601 [2024-07-26 14:33:11.361974] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:51.601 [2024-07-26 14:33:11.362171] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84845 ] 00:27:51.860 [2024-07-26 14:33:11.523522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.120 [2024-07-26 14:33:11.697121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.688 [2024-07-26 14:33:12.398237] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:52.688 [2024-07-26 14:33:12.398322] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:52.948 [2024-07-26 14:33:12.543822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.543866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:52.948 [2024-07-26 14:33:12.543902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:52.948 [2024-07-26 14:33:12.543959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.544047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.544080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:52.948 [2024-07-26 14:33:12.544109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:27:52.948 [2024-07-26 14:33:12.544120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.544159] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:52.948 [2024-07-26 14:33:12.545125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:52.948 [2024-07-26 14:33:12.545163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.545192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:52.948 [2024-07-26 14:33:12.545203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.016 ms 00:27:52.948 [2024-07-26 14:33:12.545218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.546354] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:52.948 [2024-07-26 14:33:12.559996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.560070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:52.948 [2024-07-26 14:33:12.560104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.644 ms 00:27:52.948 [2024-07-26 14:33:12.560115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.560192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.560211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:52.948 [2024-07-26 14:33:12.560223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:52.948 [2024-07-26 14:33:12.560233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.564277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 
14:33:12.564316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:52.948 [2024-07-26 14:33:12.564361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.933 ms 00:27:52.948 [2024-07-26 14:33:12.564372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.564471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.564489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:52.948 [2024-07-26 14:33:12.564504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:27:52.948 [2024-07-26 14:33:12.564515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.564573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.564588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:52.948 [2024-07-26 14:33:12.564599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:52.948 [2024-07-26 14:33:12.564610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.564643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:52.948 [2024-07-26 14:33:12.568459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.568502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:52.948 [2024-07-26 14:33:12.568519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.823 ms 00:27:52.948 [2024-07-26 14:33:12.568531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.568585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.568599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:52.948 [2024-07-26 14:33:12.568613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:52.948 [2024-07-26 14:33:12.568624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.568670] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:52.948 [2024-07-26 14:33:12.568698] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:52.948 [2024-07-26 14:33:12.568751] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:52.948 [2024-07-26 14:33:12.568784] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:27:52.948 [2024-07-26 14:33:12.568869] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:52.948 [2024-07-26 14:33:12.568887] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:52.948 [2024-07-26 14:33:12.568900] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:27:52.948 [2024-07-26 14:33:12.568912] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:52.948 [2024-07-26 14:33:12.568924] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:52.948 [2024-07-26 14:33:12.568935] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:52.948 [2024-07-26 14:33:12.568984] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:52.948 [2024-07-26 14:33:12.568995] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:52.948 [2024-07-26 14:33:12.569005] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:52.948 [2024-07-26 14:33:12.569015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.569025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:52.948 [2024-07-26 14:33:12.569036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.349 ms 00:27:52.948 [2024-07-26 14:33:12.569050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.569136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.948 [2024-07-26 14:33:12.569155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:52.948 [2024-07-26 14:33:12.569166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:52.948 [2024-07-26 14:33:12.569176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.948 [2024-07-26 14:33:12.569270] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:52.949 [2024-07-26 14:33:12.569286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:52.949 [2024-07-26 14:33:12.569297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569308] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569338] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:52.949 [2024-07-26 14:33:12.569347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569357] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:52.949 [2024-07-26 14:33:12.569366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:52.949 [2024-07-26 14:33:12.569376] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:52.949 [2024-07-26 14:33:12.569385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:52.949 [2024-07-26 14:33:12.569403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:52.949 [2024-07-26 14:33:12.569412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569421] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:52.949 [2024-07-26 14:33:12.569430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:52.949 [2024-07-26 14:33:12.569439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:52.949 [2024-07-26 14:33:12.569457] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:52.949 [2024-07-26 14:33:12.569465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569475] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:52.949 [2024-07-26 14:33:12.569484] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:52.949 [2024-07-26 14:33:12.569493] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569502] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:52.949 [2024-07-26 14:33:12.569511] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:52.949 [2024-07-26 14:33:12.569519] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:52.949 [2024-07-26 14:33:12.569537] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:52.949 [2024-07-26 14:33:12.569545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569554] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:52.949 [2024-07-26 14:33:12.569563] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:52.949 [2024-07-26 14:33:12.569572] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:52.949 [2024-07-26 14:33:12.569591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:52.949 [2024-07-26 14:33:12.569599] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569608] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:52.949 [2024-07-26 14:33:12.569617] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:52.949 [2024-07-26 14:33:12.569648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:52.949 [2024-07-26 14:33:12.569675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:52.949 [2024-07-26 14:33:12.569684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569692] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:52.949 [2024-07-26 14:33:12.569703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:52.949 [2024-07-26 14:33:12.569713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569723] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:52.949 [2024-07-26 14:33:12.569733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:52.949 [2024-07-26 14:33:12.569742] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:52.949 [2024-07-26 14:33:12.569751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:52.949 [2024-07-26 14:33:12.569760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:52.949 [2024-07-26 14:33:12.569780] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:52.949 [2024-07-26 14:33:12.569790] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:52.949 [2024-07-26 14:33:12.569800] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:52.949 [2024-07-26 14:33:12.569812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:52.949 [2024-07-26 14:33:12.569833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:52.949 [2024-07-26 14:33:12.569862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:52.949 [2024-07-26 14:33:12.569872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:52.949 [2024-07-26 14:33:12.569881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:52.949 [2024-07-26 14:33:12.569891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:52.949 [2024-07-26 14:33:12.569974] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:52.949 [2024-07-26 14:33:12.569987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.569998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:52.949 [2024-07-26 14:33:12.570009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:52.949 [2024-07-26 14:33:12.570019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:52.949 [2024-07-26 14:33:12.570029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:52.949 [2024-07-26 14:33:12.570040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.949 [2024-07-26 14:33:12.570050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:52.949 [2024-07-26 14:33:12.570060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.824 ms 00:27:52.949 [2024-07-26 14:33:12.570074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.949 [2024-07-26 14:33:12.570127] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:52.949 [2024-07-26 14:33:12.570143] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:55.480 [2024-07-26 14:33:14.740609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.740676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:55.480 [2024-07-26 14:33:14.740712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2170.496 ms 00:27:55.480 [2024-07-26 14:33:14.740736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.768693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.768753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:55.480 [2024-07-26 14:33:14.768789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.718 ms 00:27:55.480 [2024-07-26 14:33:14.768799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.768984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.769004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:55.480 [2024-07-26 14:33:14.769017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:55.480 [2024-07-26 14:33:14.769028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.801995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.802045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:55.480 [2024-07-26 14:33:14.802079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.889 ms 00:27:55.480 [2024-07-26 14:33:14.802089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.802151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.802166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:55.480 [2024-07-26 14:33:14.802177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:55.480 [2024-07-26 14:33:14.802188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.802543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.802561] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:55.480 [2024-07-26 14:33:14.802573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:27:55.480 [2024-07-26 14:33:14.802584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.802634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.802649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:55.480 [2024-07-26 14:33:14.802660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:27:55.480 [2024-07-26 14:33:14.802670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.480 [2024-07-26 14:33:14.818012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.480 [2024-07-26 14:33:14.818051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:55.481 [2024-07-26 14:33:14.818083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.316 ms 00:27:55.481 [2024-07-26 14:33:14.818093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.832124] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:55.481 [2024-07-26 14:33:14.832166] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:55.481 [2024-07-26 14:33:14.832201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.832213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:55.481 [2024-07-26 14:33:14.832226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.981 ms 00:27:55.481 [2024-07-26 14:33:14.832237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.847704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.847741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:55.481 [2024-07-26 14:33:14.847773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.422 ms 00:27:55.481 [2024-07-26 14:33:14.847784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.861050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.861086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:55.481 [2024-07-26 14:33:14.861116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.215 ms 00:27:55.481 [2024-07-26 14:33:14.861127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.874229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.874265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:55.481 [2024-07-26 14:33:14.874295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.060 ms 00:27:55.481 [2024-07-26 14:33:14.874305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.875154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.875204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:55.481 [2024-07-26 
14:33:14.875224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.708 ms 00:27:55.481 [2024-07-26 14:33:14.875236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.945159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.945225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:55.481 [2024-07-26 14:33:14.945260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 69.894 ms 00:27:55.481 [2024-07-26 14:33:14.945271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.955956] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:55.481 [2024-07-26 14:33:14.956719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.956751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:55.481 [2024-07-26 14:33:14.956772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.379 ms 00:27:55.481 [2024-07-26 14:33:14.956783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.956900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.956956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:55.481 [2024-07-26 14:33:14.956970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:55.481 [2024-07-26 14:33:14.956981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.957112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.957131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:55.481 [2024-07-26 14:33:14.957158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:55.481 [2024-07-26 14:33:14.957175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.957208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.957222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:55.481 [2024-07-26 14:33:14.957234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:55.481 [2024-07-26 14:33:14.957261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.957348] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:55.481 [2024-07-26 14:33:14.957365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.957376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:55.481 [2024-07-26 14:33:14.957388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:55.481 [2024-07-26 14:33:14.957398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.982846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.982882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:55.481 [2024-07-26 14:33:14.982941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.413 ms 00:27:55.481 [2024-07-26 14:33:14.982954] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.983045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-07-26 14:33:14.983062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:55.481 [2024-07-26 14:33:14.983074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:27:55.481 [2024-07-26 14:33:14.983092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-07-26 14:33:14.984556] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2440.088 ms, result 0 00:27:55.481 [2024-07-26 14:33:14.999237] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.481 [2024-07-26 14:33:15.015243] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:55.481 [2024-07-26 14:33:15.023380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:56.052 14:33:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:56.052 14:33:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:27:56.052 14:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:56.052 14:33:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:56.052 14:33:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:56.311 [2024-07-26 14:33:15.988375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.311 [2024-07-26 14:33:15.988451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:56.311 [2024-07-26 14:33:15.988472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:56.311 [2024-07-26 14:33:15.988483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.311 [2024-07-26 14:33:15.988517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.311 [2024-07-26 14:33:15.988531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:56.311 [2024-07-26 14:33:15.988542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:56.311 [2024-07-26 14:33:15.988552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.311 [2024-07-26 14:33:15.988577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:56.311 [2024-07-26 14:33:15.988590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:56.311 [2024-07-26 14:33:15.988601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:56.311 [2024-07-26 14:33:15.988616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:56.311 [2024-07-26 14:33:15.988682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.318 ms, result 0 00:27:56.311 true 00:27:56.311 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.571 { 00:27:56.571 "name": "ftl", 00:27:56.571 "properties": [ 00:27:56.571 { 00:27:56.571 "name": "superblock_version", 00:27:56.571 "value": 5, 00:27:56.571 "read-only": true 00:27:56.571 }, 
00:27:56.571 { 00:27:56.571 "name": "base_device", 00:27:56.571 "bands": [ 00:27:56.571 { 00:27:56.571 "id": 0, 00:27:56.571 "state": "CLOSED", 00:27:56.571 "validity": 1.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 1, 00:27:56.571 "state": "CLOSED", 00:27:56.571 "validity": 1.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 2, 00:27:56.571 "state": "CLOSED", 00:27:56.571 "validity": 0.007843137254901933 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 3, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 4, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 5, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 6, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 7, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 8, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 9, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 10, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 11, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 12, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 13, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 14, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 15, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 16, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 17, 00:27:56.571 "state": "FREE", 00:27:56.571 "validity": 0.0 00:27:56.571 } 00:27:56.571 ], 00:27:56.571 "read-only": true 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "name": "cache_device", 00:27:56.571 "type": "bdev", 00:27:56.571 "chunks": [ 00:27:56.571 { 00:27:56.571 "id": 0, 00:27:56.571 "state": "INACTIVE", 00:27:56.571 "utilization": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 1, 00:27:56.571 "state": "OPEN", 00:27:56.571 "utilization": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 2, 00:27:56.571 "state": "OPEN", 00:27:56.571 "utilization": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 3, 00:27:56.571 "state": "FREE", 00:27:56.571 "utilization": 0.0 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "id": 4, 00:27:56.571 "state": "FREE", 00:27:56.571 "utilization": 0.0 00:27:56.571 } 00:27:56.571 ], 00:27:56.571 "read-only": true 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "name": "verbose_mode", 00:27:56.571 "value": true, 00:27:56.571 "unit": "", 00:27:56.571 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:56.571 }, 00:27:56.571 { 00:27:56.571 "name": "prep_upgrade_on_shutdown", 00:27:56.571 "value": false, 00:27:56.571 "unit": "", 00:27:56.571 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:56.571 } 00:27:56.571 ] 00:27:56.571 } 00:27:56.571 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:56.571 14:33:16 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:56.571 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.831 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:56.831 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:56.831 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:56.831 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:56.831 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:57.090 Validate MD5 checksum, iteration 1 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:57.090 14:33:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:57.090 [2024-07-26 14:33:16.839691] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
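The two jq filters traced above reduce the bdev_ftl_get_properties dump (the per-band and per-chunk detail it contains is the "verbose mode" output enabled just before) to two counters: used, the number of cache_device chunks with non-zero utilization, and opened, the number of bands still in the OPENED state; both come back 0 here, so the device is quiescent before checksum validation starts. A minimal standalone sketch of the same query, assuming the JSON above has been saved to props.json (hypothetical file name); note that the opened-bands filter in the trace selects .name == "bands" while the dump labels that property "base_device", so as written it may always evaluate to 0:

used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' props.json)
opened=$(jq '[.properties[] | select(.name == "base_device") | .bands[] | select(.state == "OPENED")] | length' props.json)
echo "used=$used opened=$opened"    # both come out 0 in the run above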
00:27:57.090 [2024-07-26 14:33:16.840160] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84914 ] 00:27:57.360 [2024-07-26 14:33:16.993054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.633 [2024-07-26 14:33:17.169744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.322  Copying: 503/1024 [MB] (503 MBps) Copying: 977/1024 [MB] (474 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:28:01.322 00:28:01.322 14:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:01.322 14:33:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.855 Validate MD5 checksum, iteration 2 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9e2b2ca58c6d2bb9d6ad4b261452d6a4 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9e2b2ca58c6d2bb9d6ad4b261452d6a4 != \9\e\2\b\2\c\a\5\8\c\6\d\2\b\b\9\d\6\a\d\4\b\2\6\1\4\5\2\d\6\a\4 ]] 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:03.855 14:33:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:03.855 [2024-07-26 14:33:23.090479] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
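Each validation iteration above follows the same pattern: tcp_dd launches spdk_dd with the initiator config so it can read 1024 MiB from the ftln1 bdev (exported over NVMe/TCP by the target) into a scratch file, the file is hashed with md5sum, and the digest is compared against the value recorded earlier for that 1 GiB window. A minimal sketch of the loop, with paths abbreviated, assuming ini.json attaches the target and exposes ftln1 as in this run; the sums[] array holding the reference digests is hypothetical (the real script derives them in an earlier phase):

iterations=2
for ((i = 0; i < iterations; i++)); do
    spdk_dd --json=ini.json --ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
    sum=$(md5sum file | cut -f1 -d' ')
    [[ $sum == "${sums[i]}" ]] || exit 1    # digest must match the one recorded for window i
done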
00:28:03.855 [2024-07-26 14:33:23.090649] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84982 ] 00:28:03.855 [2024-07-26 14:33:23.263971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.855 [2024-07-26 14:33:23.463062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.502  Copying: 478/1024 [MB] (478 MBps) Copying: 955/1024 [MB] (477 MBps) Copying: 1024/1024 [MB] (average 479 MBps) 00:28:08.502 00:28:08.502 14:33:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:08.502 14:33:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7da4642d7d3d93aa061faa6b40ff408e 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7da4642d7d3d93aa061faa6b40ff408e != \7\d\a\4\6\4\2\d\7\d\3\d\9\3\a\a\0\6\1\f\a\a\6\b\4\0\f\f\4\0\8\e ]] 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84845 ]] 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84845 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:10.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85055 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85055 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85055 ']' 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
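With both pre-shutdown digests recorded, tcp_target_shutdown_dirty simulates a crash: the target (pid 84845) is killed with SIGKILL, so FTL never gets to persist its clean-shutdown metadata, and a fresh spdk_tgt (pid 85055) is started from the tgt.json saved earlier, recreating the same bdev stack and forcing the FTL bdev down the recovery path on its next startup. A minimal sketch of that step, paths abbreviated, with a simple readiness poll standing in for the waitforlisten helper the test actually uses:

kill -9 "$spdk_tgt_pid"                       # SIGKILL: FTL gets no chance to write its clean-shutdown state
unset spdk_tgt_pid
./build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
spdk_tgt_pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                 # poll the RPC socket before issuing further RPCs
done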
00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.406 14:33:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:10.664 [2024-07-26 14:33:30.209051] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:10.665 [2024-07-26 14:33:30.209218] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85055 ] 00:28:10.665 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 84845 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:10.665 [2024-07-26 14:33:30.374957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.923 [2024-07-26 14:33:30.542047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.491 [2024-07-26 14:33:31.243097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:11.491 [2024-07-26 14:33:31.243183] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:11.750 [2024-07-26 14:33:31.389978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.750 [2024-07-26 14:33:31.390047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:11.750 [2024-07-26 14:33:31.390083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:11.750 [2024-07-26 14:33:31.390094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.390158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.390176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:11.751 [2024-07-26 14:33:31.390187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:28:11.751 [2024-07-26 14:33:31.390197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.390230] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:11.751 [2024-07-26 14:33:31.391174] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:11.751 [2024-07-26 14:33:31.391217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.391232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:11.751 [2024-07-26 14:33:31.391243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.997 ms 00:28:11.751 [2024-07-26 14:33:31.391260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.391733] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:11.751 [2024-07-26 14:33:31.409264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.409311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:11.751 [2024-07-26 14:33:31.409349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.531 ms 00:28:11.751 [2024-07-26 14:33:31.409359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.419146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:11.751 [2024-07-26 14:33:31.419185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:11.751 [2024-07-26 14:33:31.419215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:11.751 [2024-07-26 14:33:31.419225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.419661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.419688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:11.751 [2024-07-26 14:33:31.419701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:28:11.751 [2024-07-26 14:33:31.419710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.419788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.419836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:11.751 [2024-07-26 14:33:31.419864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:11.751 [2024-07-26 14:33:31.419874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.419914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.419928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:11.751 [2024-07-26 14:33:31.419942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:11.751 [2024-07-26 14:33:31.419952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.420054] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:11.751 [2024-07-26 14:33:31.423658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.423709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:11.751 [2024-07-26 14:33:31.423741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.637 ms 00:28:11.751 [2024-07-26 14:33:31.423751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.423794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.423810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:11.751 [2024-07-26 14:33:31.423821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:11.751 [2024-07-26 14:33:31.423831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.423876] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:11.751 [2024-07-26 14:33:31.423920] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:11.751 [2024-07-26 14:33:31.424003] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:11.751 [2024-07-26 14:33:31.424060] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:28:11.751 [2024-07-26 14:33:31.424167] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:11.751 [2024-07-26 14:33:31.424192] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:11.751 [2024-07-26 14:33:31.424206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:28:11.751 [2024-07-26 14:33:31.424221] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424233] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424245] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:11.751 [2024-07-26 14:33:31.424262] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:11.751 [2024-07-26 14:33:31.424272] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:11.751 [2024-07-26 14:33:31.424298] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:11.751 [2024-07-26 14:33:31.424310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.424340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:11.751 [2024-07-26 14:33:31.424351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.436 ms 00:28:11.751 [2024-07-26 14:33:31.424362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.424463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.751 [2024-07-26 14:33:31.424477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:11.751 [2024-07-26 14:33:31.424488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:28:11.751 [2024-07-26 14:33:31.424503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.751 [2024-07-26 14:33:31.424610] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:11.751 [2024-07-26 14:33:31.424633] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:11.751 [2024-07-26 14:33:31.424646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424668] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:11.751 [2024-07-26 14:33:31.424678] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:11.751 [2024-07-26 14:33:31.424697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:11.751 [2024-07-26 14:33:31.424707] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:11.751 [2024-07-26 14:33:31.424717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:11.751 [2024-07-26 14:33:31.424736] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:11.751 [2024-07-26 14:33:31.424746] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:11.751 [2024-07-26 14:33:31.424765] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
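The geometry printed during layout setup is internally consistent and worth sanity-checking when reading a dump like this: the L2P keeps one 4-byte entry per logical block, so the 3774873 entries reported above need roughly 14.4 MiB, which matches the 14.50 MiB l2p region in the dump once rounded up to the region's allocation granularity. A quick check of that arithmetic (plain awk, nothing SPDK-specific):

awk 'BEGIN { printf "%.2f MiB\n", 3774873 * 4 / (1024 * 1024) }'    # -> 14.40 MiB, stored as the 14.50 MiB l2p region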
00:28:11.751 [2024-07-26 14:33:31.424776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:11.751 [2024-07-26 14:33:31.424795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:11.751 [2024-07-26 14:33:31.424805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:11.751 [2024-07-26 14:33:31.424825] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:11.751 [2024-07-26 14:33:31.424834] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:11.751 [2024-07-26 14:33:31.424854] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:11.751 [2024-07-26 14:33:31.424863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424873] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:11.751 [2024-07-26 14:33:31.424882] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:11.751 [2024-07-26 14:33:31.424892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:11.751 [2024-07-26 14:33:31.424925] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:11.751 [2024-07-26 14:33:31.424934] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:11.751 [2024-07-26 14:33:31.424943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:11.751 [2024-07-26 14:33:31.424953] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:11.751 [2024-07-26 14:33:31.424962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.424988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:11.751 [2024-07-26 14:33:31.425000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:11.751 [2024-07-26 14:33:31.425009] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.425034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:11.751 [2024-07-26 14:33:31.425044] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:11.751 [2024-07-26 14:33:31.425053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.751 [2024-07-26 14:33:31.425061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:11.752 [2024-07-26 14:33:31.425070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:11.752 [2024-07-26 14:33:31.425079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:11.752 [2024-07-26 14:33:31.425088] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:11.752 [2024-07-26 14:33:31.425098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:11.752 [2024-07-26 14:33:31.425108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:11.752 [2024-07-26 14:33:31.425117] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:11.752 [2024-07-26 14:33:31.425129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:11.752 [2024-07-26 14:33:31.425138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:11.752 [2024-07-26 14:33:31.425160] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:11.752 [2024-07-26 14:33:31.425170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:11.752 [2024-07-26 14:33:31.425179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:11.752 [2024-07-26 14:33:31.425188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:11.752 [2024-07-26 14:33:31.425200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:11.752 [2024-07-26 14:33:31.425217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:11.752 [2024-07-26 14:33:31.425238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:11.752 [2024-07-26 14:33:31.425268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:11.752 [2024-07-26 14:33:31.425278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:11.752 [2024-07-26 14:33:31.425288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:11.752 [2024-07-26 14:33:31.425298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:11.752 [2024-07-26 14:33:31.425370] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:11.752 [2024-07-26 14:33:31.425381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:11.752 [2024-07-26 14:33:31.425401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:11.752 [2024-07-26 14:33:31.425411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:11.752 [2024-07-26 14:33:31.425421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:11.752 [2024-07-26 14:33:31.425432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.425443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:11.752 [2024-07-26 14:33:31.425453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.885 ms 00:28:11.752 [2024-07-26 14:33:31.425463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.454445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.454776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:11.752 [2024-07-26 14:33:31.454944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.913 ms 00:28:11.752 [2024-07-26 14:33:31.455075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.455194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.455298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:11.752 [2024-07-26 14:33:31.455446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:11.752 [2024-07-26 14:33:31.455583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.489771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.490172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:11.752 [2024-07-26 14:33:31.490312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.015 ms 00:28:11.752 [2024-07-26 14:33:31.490367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.490570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.490665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:11.752 [2024-07-26 14:33:31.490859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:11.752 [2024-07-26 14:33:31.490960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.491247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.491468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:11.752 [2024-07-26 14:33:31.491583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.081 ms 00:28:11.752 [2024-07-26 14:33:31.491636] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.491789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.491960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:11.752 [2024-07-26 14:33:31.492122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:11.752 [2024-07-26 14:33:31.492179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.509596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.509968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:11.752 [2024-07-26 14:33:31.510111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.283 ms 00:28:11.752 [2024-07-26 14:33:31.510240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:11.752 [2024-07-26 14:33:31.510467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:11.752 [2024-07-26 14:33:31.510526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:11.752 [2024-07-26 14:33:31.510640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:11.752 [2024-07-26 14:33:31.510692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.011 [2024-07-26 14:33:31.547755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.011 [2024-07-26 14:33:31.548089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:12.012 [2024-07-26 14:33:31.548125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.886 ms 00:28:12.012 [2024-07-26 14:33:31.548139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.559336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.012 [2024-07-26 14:33:31.559385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:12.012 [2024-07-26 14:33:31.559417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.604 ms 00:28:12.012 [2024-07-26 14:33:31.559427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.627803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.012 [2024-07-26 14:33:31.627865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:12.012 [2024-07-26 14:33:31.627902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 68.288 ms 00:28:12.012 [2024-07-26 14:33:31.627945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.628221] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:12.012 [2024-07-26 14:33:31.628407] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:12.012 [2024-07-26 14:33:31.628559] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:12.012 [2024-07-26 14:33:31.628691] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:12.012 [2024-07-26 14:33:31.628705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.012 [2024-07-26 14:33:31.628717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:12.012 [2024-07-26 
14:33:31.628734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.632 ms 00:28:12.012 [2024-07-26 14:33:31.628745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.628877] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:12.012 [2024-07-26 14:33:31.628898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.012 [2024-07-26 14:33:31.628909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:12.012 [2024-07-26 14:33:31.628921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:12.012 [2024-07-26 14:33:31.628932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.648625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.012 [2024-07-26 14:33:31.648674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:12.012 [2024-07-26 14:33:31.648707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.628 ms 00:28:12.012 [2024-07-26 14:33:31.648718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.659398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.012 [2024-07-26 14:33:31.659438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:12.012 [2024-07-26 14:33:31.659469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:12.012 [2024-07-26 14:33:31.659484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.012 [2024-07-26 14:33:31.659698] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:12.578 [2024-07-26 14:33:32.232753] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:12.578 [2024-07-26 14:33:32.233045] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:13.145 [2024-07-26 14:33:32.814294] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:13.145 [2024-07-26 14:33:32.814442] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:13.145 [2024-07-26 14:33:32.814480] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:13.145 [2024-07-26 14:33:32.814496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.814508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:13.145 [2024-07-26 14:33:32.814537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1154.922 ms 00:28:13.145 [2024-07-26 14:33:32.814562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.814617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.814630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:13.145 [2024-07-26 14:33:32.814640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:13.145 [2024-07-26 14:33:32.814650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:28:13.145 [2024-07-26 14:33:32.825951] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:13.145 [2024-07-26 14:33:32.826101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.826118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:13.145 [2024-07-26 14:33:32.826130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.423 ms 00:28:13.145 [2024-07-26 14:33:32.826140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.826811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.826840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:13.145 [2024-07-26 14:33:32.826854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.574 ms 00:28:13.145 [2024-07-26 14:33:32.826864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.829299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.829329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:13.145 [2024-07-26 14:33:32.829357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.406 ms 00:28:13.145 [2024-07-26 14:33:32.829367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.829412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.829425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:13.145 [2024-07-26 14:33:32.829436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:13.145 [2024-07-26 14:33:32.829445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.829562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.829582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:13.145 [2024-07-26 14:33:32.829592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:13.145 [2024-07-26 14:33:32.829601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.829625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.829637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:13.145 [2024-07-26 14:33:32.829647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:13.145 [2024-07-26 14:33:32.829656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.829692] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:13.145 [2024-07-26 14:33:32.829706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 [2024-07-26 14:33:32.829716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:13.145 [2024-07-26 14:33:32.829729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:13.145 [2024-07-26 14:33:32.829738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.145 [2024-07-26 14:33:32.829790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:13.145 
[2024-07-26 14:33:32.829803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:13.145 [2024-07-26 14:33:32.829813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:13.145 [2024-07-26 14:33:32.829822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:13.146 [2024-07-26 14:33:32.831049] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1440.524 ms, result 0 00:28:13.146 [2024-07-26 14:33:32.846516] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.146 [2024-07-26 14:33:32.862547] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:13.146 [2024-07-26 14:33:32.871153] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:13.409 Validate MD5 checksum, iteration 1 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:13.409 14:33:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:13.409 [2024-07-26 14:33:33.016837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
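Because the shutdown was dirty, this second startup takes the recovery path instead of a clean load: the superblock reports shm_clean 0, the four P2L checkpoints are preprocessed (ckpt_id 0 through 3 with their recorded seq_ids), the two open NV-cache chunks at offsets 262144 and 524288 are replayed (about 1.15 s of the 1.44 s total startup), and the NV cache then reports full chunks = 2 before L2P and band state are finalized. When triaging a run like this, those markers are enough to confirm that recovery actually ran; a small grep sketch, assuming the target output has been captured to spdk_tgt.log (hypothetical file name):

grep -E 'SHM: clean|P2L ckpt_id=|Start recovery open chunk|Recovered chunk|FTL NV Cache: full chunks' spdk_tgt.log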
00:28:13.409 [2024-07-26 14:33:33.017528] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85090 ] 00:28:13.696 [2024-07-26 14:33:33.192484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.696 [2024-07-26 14:33:33.401854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.897  Copying: 487/1024 [MB] (487 MBps) Copying: 975/1024 [MB] (488 MBps) Copying: 1024/1024 [MB] (average 486 MBps) 00:28:17.897 00:28:17.897 14:33:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:17.897 14:33:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:19.800 Validate MD5 checksum, iteration 2 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9e2b2ca58c6d2bb9d6ad4b261452d6a4 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9e2b2ca58c6d2bb9d6ad4b261452d6a4 != \9\e\2\b\2\c\a\5\8\c\6\d\2\b\b\9\d\6\a\d\4\b\2\6\1\4\5\2\d\6\a\4 ]] 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:19.800 14:33:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:19.800 [2024-07-26 14:33:39.393728] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
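The pass criterion for the whole dirty-shutdown exercise is visible right here: the post-recovery read of window 0 hashes to 9e2b2ca58c6d2bb9d6ad4b261452d6a4, and window 1 (further below) to 7da4642d7d3d93aa061faa6b40ff408e, identical to the values computed before the target was killed, so recovery reproduced the data byte for byte. A minimal sketch of that comparison, with the pre-shutdown digests kept in a hypothetical associative array rather than the test's own bookkeeping:

i=0                                            # window index for the iteration shown above (hypothetical)
declare -A pre=( [0]=9e2b2ca58c6d2bb9d6ad4b261452d6a4 [1]=7da4642d7d3d93aa061faa6b40ff408e )
post=$(md5sum file | cut -f1 -d' ')            # digest of the window just re-read into 'file'
[[ $post == "${pre[$i]}" ]] || { echo "window $i changed across the dirty shutdown"; exit 1; }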
00:28:19.800 [2024-07-26 14:33:39.393942] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85157 ] 00:28:20.058 [2024-07-26 14:33:39.570467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.058 [2024-07-26 14:33:39.794297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.270  Copying: 489/1024 [MB] (489 MBps) Copying: 971/1024 [MB] (482 MBps) Copying: 1024/1024 [MB] (average 485 MBps) 00:28:24.270 00:28:24.270 14:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:24.270 14:33:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7da4642d7d3d93aa061faa6b40ff408e 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7da4642d7d3d93aa061faa6b40ff408e != \7\d\a\4\6\4\2\d\7\d\3\d\9\3\a\a\0\6\1\f\a\a\6\b\4\0\f\f\4\0\8\e ]] 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85055 ]] 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85055 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85055 ']' 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85055 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85055 00:28:26.172 killing process with pid 85055 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:26.172 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85055' 00:28:26.173 14:33:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85055 00:28:26.173 14:33:45 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 85055 00:28:27.109 [2024-07-26 14:33:46.732485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:27.109 [2024-07-26 14:33:46.748501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.748566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:27.109 [2024-07-26 14:33:46.748617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:27.109 [2024-07-26 14:33:46.748643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.748670] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:27.109 [2024-07-26 14:33:46.752166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.752207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:27.109 [2024-07-26 14:33:46.752224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.476 ms 00:28:27.109 [2024-07-26 14:33:46.752235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.752487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.752506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:27.109 [2024-07-26 14:33:46.752518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.223 ms 00:28:27.109 [2024-07-26 14:33:46.752529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.753816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.753850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:27.109 [2024-07-26 14:33:46.753881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.266 ms 00:28:27.109 [2024-07-26 14:33:46.753898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.755204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.755235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:27.109 [2024-07-26 14:33:46.755279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.234 ms 00:28:27.109 [2024-07-26 14:33:46.755305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.768246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.768291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:27.109 [2024-07-26 14:33:46.768317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.889 ms 00:28:27.109 [2024-07-26 14:33:46.768329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.775404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.775446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:27.109 [2024-07-26 14:33:46.775464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.030 ms 00:28:27.109 [2024-07-26 14:33:46.775475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.775577] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.775604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:27.109 [2024-07-26 14:33:46.775617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:28:27.109 [2024-07-26 14:33:46.775633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.788290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.788333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:27.109 [2024-07-26 14:33:46.788351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.605 ms 00:28:27.109 [2024-07-26 14:33:46.788362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.801213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.801249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:27.109 [2024-07-26 14:33:46.801296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.808 ms 00:28:27.109 [2024-07-26 14:33:46.801308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.814120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.814172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:27.109 [2024-07-26 14:33:46.814204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.771 ms 00:28:27.109 [2024-07-26 14:33:46.814214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.826459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.826497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:27.109 [2024-07-26 14:33:46.826528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.139 ms 00:28:27.109 [2024-07-26 14:33:46.826538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.826578] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:27.109 [2024-07-26 14:33:46.826613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:27.109 [2024-07-26 14:33:46.826642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:27.109 [2024-07-26 14:33:46.826653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:27.109 [2024-07-26 14:33:46.826679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:27.109 [2024-07-26 14:33:46.826860] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:27.109 [2024-07-26 14:33:46.826871] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 9de8945d-d487-4658-a5e5-cd03fde449c9 00:28:27.109 [2024-07-26 14:33:46.826882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:27.109 [2024-07-26 14:33:46.826891] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:27.109 [2024-07-26 14:33:46.826900] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:27.109 [2024-07-26 14:33:46.826910] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:27.109 [2024-07-26 14:33:46.826919] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:27.109 [2024-07-26 14:33:46.826963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:27.109 [2024-07-26 14:33:46.826980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:27.109 [2024-07-26 14:33:46.826990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:27.109 [2024-07-26 14:33:46.826999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:27.109 [2024-07-26 14:33:46.827009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.827020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:27.109 [2024-07-26 14:33:46.827031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:28:27.109 [2024-07-26 14:33:46.827043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.841835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.841871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:27.109 [2024-07-26 14:33:46.841903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.752 ms 00:28:27.109 [2024-07-26 14:33:46.841954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.109 [2024-07-26 14:33:46.842430] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:27.109 [2024-07-26 14:33:46.842466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:27.110 [2024-07-26 14:33:46.842481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.449 ms 00:28:27.110 [2024-07-26 14:33:46.842492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.368 [2024-07-26 14:33:46.892497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.368 [2024-07-26 14:33:46.892774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:27.368 [2024-07-26 14:33:46.892912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.368 [2024-07-26 14:33:46.893067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.368 [2024-07-26 14:33:46.893169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.368 [2024-07-26 14:33:46.893319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:27.368 [2024-07-26 14:33:46.893439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.368 [2024-07-26 14:33:46.893488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.368 [2024-07-26 14:33:46.893730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.368 [2024-07-26 14:33:46.893862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:27.368 [2024-07-26 14:33:46.894026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:46.894090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:46.894228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:46.894283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:27.369 [2024-07-26 14:33:46.894339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:46.894443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:46.986775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:46.987036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:27.369 [2024-07-26 14:33:46.987161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:46.987239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.059760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.060089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:27.369 [2024-07-26 14:33:47.060216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.060270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.060533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.060586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:27.369 [2024-07-26 14:33:47.060627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.060734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 
14:33:47.060830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.060885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:27.369 [2024-07-26 14:33:47.060922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.061039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.061196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.061338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:27.369 [2024-07-26 14:33:47.061364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.061376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.061451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.061476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:27.369 [2024-07-26 14:33:47.061488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.061498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.061542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.061556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:27.369 [2024-07-26 14:33:47.061567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.061577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.061655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:27.369 [2024-07-26 14:33:47.061677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:27.369 [2024-07-26 14:33:47.061688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:27.369 [2024-07-26 14:33:47.061698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:27.369 [2024-07-26 14:33:47.061827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 313.295 ms, result 0 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:28.746 Remove shared memory files 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84845 
00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:28.746 ************************************ 00:28:28.746 END TEST ftl_upgrade_shutdown 00:28:28.746 ************************************ 00:28:28.746 00:28:28.746 real 1m29.524s 00:28:28.746 user 2m8.587s 00:28:28.746 sys 0m21.562s 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:28.746 14:33:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.746 14:33:48 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:28:28.746 14:33:48 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:28.746 14:33:48 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:28:28.746 14:33:48 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:28.746 14:33:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:28.746 ************************************ 00:28:28.746 START TEST ftl_restore_fast 00:28:28.746 ************************************ 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:28:28.746 * Looking for test storage... 00:28:28.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.DcsofR1mjq 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:28.746 14:33:48 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=85319 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 85319 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 85319 ']' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:28.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:28.746 14:33:48 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:28.746 [2024-07-26 14:33:48.377197] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:28.746 [2024-07-26 14:33:48.377366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85319 ] 00:28:29.005 [2024-07-26 14:33:48.532589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.005 [2024-07-26 14:33:48.681741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:28:29.572 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:28:29.845 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:30.150 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:30.150 { 00:28:30.150 "name": "nvme0n1", 00:28:30.150 "aliases": [ 00:28:30.150 "be89a202-ed73-4fe6-8346-2706781234a8" 00:28:30.150 ], 00:28:30.150 "product_name": "NVMe disk", 00:28:30.150 "block_size": 4096, 00:28:30.150 "num_blocks": 1310720, 00:28:30.150 "uuid": "be89a202-ed73-4fe6-8346-2706781234a8", 00:28:30.150 "assigned_rate_limits": { 00:28:30.150 "rw_ios_per_sec": 0, 00:28:30.150 "rw_mbytes_per_sec": 0, 00:28:30.150 "r_mbytes_per_sec": 0, 00:28:30.150 "w_mbytes_per_sec": 0 00:28:30.150 }, 00:28:30.150 "claimed": true, 00:28:30.150 "claim_type": "read_many_write_one", 00:28:30.150 "zoned": false, 00:28:30.150 "supported_io_types": { 00:28:30.150 "read": true, 00:28:30.150 "write": true, 00:28:30.150 "unmap": true, 00:28:30.150 "flush": true, 00:28:30.150 "reset": true, 00:28:30.150 "nvme_admin": true, 00:28:30.150 "nvme_io": true, 00:28:30.150 "nvme_io_md": false, 00:28:30.150 "write_zeroes": true, 00:28:30.150 "zcopy": false, 00:28:30.150 "get_zone_info": false, 00:28:30.150 "zone_management": false, 00:28:30.150 "zone_append": false, 00:28:30.150 "compare": true, 00:28:30.150 "compare_and_write": false, 00:28:30.150 "abort": true, 00:28:30.150 "seek_hole": false, 00:28:30.150 "seek_data": false, 00:28:30.151 "copy": true, 00:28:30.151 "nvme_iov_md": false 00:28:30.151 }, 00:28:30.151 "driver_specific": { 00:28:30.151 "nvme": [ 00:28:30.151 { 00:28:30.151 "pci_address": "0000:00:11.0", 00:28:30.151 "trid": { 00:28:30.151 "trtype": "PCIe", 00:28:30.151 "traddr": "0000:00:11.0" 00:28:30.151 }, 00:28:30.151 "ctrlr_data": { 00:28:30.151 "cntlid": 0, 00:28:30.151 "vendor_id": "0x1b36", 00:28:30.151 "model_number": "QEMU NVMe Ctrl", 00:28:30.151 "serial_number": "12341", 00:28:30.151 "firmware_revision": "8.0.0", 00:28:30.151 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:30.151 "oacs": { 00:28:30.151 "security": 0, 00:28:30.151 "format": 1, 00:28:30.151 "firmware": 0, 00:28:30.151 "ns_manage": 1 00:28:30.151 }, 00:28:30.151 "multi_ctrlr": false, 00:28:30.151 "ana_reporting": false 00:28:30.151 }, 00:28:30.151 "vs": { 00:28:30.151 "nvme_version": "1.4" 00:28:30.151 }, 00:28:30.151 "ns_data": { 00:28:30.151 "id": 1, 00:28:30.151 "can_share": false 00:28:30.151 } 00:28:30.151 } 00:28:30.151 ], 00:28:30.151 "mp_policy": "active_passive" 00:28:30.151 } 00:28:30.151 } 00:28:30.151 ]' 00:28:30.152 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:30.152 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:30.152 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:30.420 14:33:49 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:30.420 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=b72a5aed-b814-4c42-af09-0f12f9fed4c4 00:28:30.420 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:28:30.420 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b72a5aed-b814-4c42-af09-0f12f9fed4c4 00:28:30.679 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:30.936 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=c95932c0-67d7-4e25-9d17-5b6082c6043a 00:28:30.937 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c95932c0-67d7-4e25-9d17-5b6082c6043a 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:31.195 14:33:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.454 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:31.455 { 00:28:31.455 "name": "860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b", 00:28:31.455 "aliases": [ 00:28:31.455 "lvs/nvme0n1p0" 00:28:31.455 ], 00:28:31.455 "product_name": "Logical Volume", 00:28:31.455 "block_size": 4096, 00:28:31.455 "num_blocks": 26476544, 00:28:31.455 "uuid": "860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b", 00:28:31.455 "assigned_rate_limits": { 00:28:31.455 "rw_ios_per_sec": 0, 00:28:31.455 "rw_mbytes_per_sec": 0, 00:28:31.455 "r_mbytes_per_sec": 0, 00:28:31.455 "w_mbytes_per_sec": 0 00:28:31.455 }, 00:28:31.455 "claimed": false, 00:28:31.455 "zoned": false, 00:28:31.455 "supported_io_types": { 00:28:31.455 "read": true, 00:28:31.455 "write": true, 00:28:31.455 "unmap": true, 00:28:31.455 "flush": false, 00:28:31.455 "reset": true, 00:28:31.455 "nvme_admin": false, 00:28:31.455 "nvme_io": false, 00:28:31.455 "nvme_io_md": false, 00:28:31.455 "write_zeroes": true, 00:28:31.455 "zcopy": false, 00:28:31.455 "get_zone_info": false, 00:28:31.455 "zone_management": false, 00:28:31.455 
"zone_append": false, 00:28:31.455 "compare": false, 00:28:31.455 "compare_and_write": false, 00:28:31.455 "abort": false, 00:28:31.455 "seek_hole": true, 00:28:31.455 "seek_data": true, 00:28:31.455 "copy": false, 00:28:31.455 "nvme_iov_md": false 00:28:31.455 }, 00:28:31.455 "driver_specific": { 00:28:31.455 "lvol": { 00:28:31.455 "lvol_store_uuid": "c95932c0-67d7-4e25-9d17-5b6082c6043a", 00:28:31.455 "base_bdev": "nvme0n1", 00:28:31.455 "thin_provision": true, 00:28:31.455 "num_allocated_clusters": 0, 00:28:31.455 "snapshot": false, 00:28:31.455 "clone": false, 00:28:31.455 "esnap_clone": false 00:28:31.455 } 00:28:31.455 } 00:28:31.455 } 00:28:31.455 ]' 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:28:31.455 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:31.714 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:31.974 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:31.974 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:31.974 { 00:28:31.974 "name": "860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b", 00:28:31.974 "aliases": [ 00:28:31.974 "lvs/nvme0n1p0" 00:28:31.974 ], 00:28:31.974 "product_name": "Logical Volume", 00:28:31.974 "block_size": 4096, 00:28:31.974 "num_blocks": 26476544, 00:28:31.974 "uuid": "860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b", 00:28:31.974 "assigned_rate_limits": { 00:28:31.974 "rw_ios_per_sec": 0, 00:28:31.974 "rw_mbytes_per_sec": 0, 00:28:31.974 "r_mbytes_per_sec": 0, 00:28:31.974 "w_mbytes_per_sec": 0 00:28:31.974 }, 00:28:31.974 "claimed": false, 00:28:31.974 "zoned": false, 00:28:31.974 "supported_io_types": { 00:28:31.974 "read": true, 00:28:31.974 "write": true, 00:28:31.974 "unmap": true, 00:28:31.974 "flush": false, 00:28:31.974 "reset": true, 00:28:31.974 "nvme_admin": false, 00:28:31.974 "nvme_io": false, 00:28:31.974 "nvme_io_md": false, 00:28:31.974 "write_zeroes": true, 00:28:31.974 "zcopy": false, 00:28:31.974 "get_zone_info": false, 00:28:31.974 
"zone_management": false, 00:28:31.974 "zone_append": false, 00:28:31.974 "compare": false, 00:28:31.974 "compare_and_write": false, 00:28:31.974 "abort": false, 00:28:31.974 "seek_hole": true, 00:28:31.974 "seek_data": true, 00:28:31.974 "copy": false, 00:28:31.974 "nvme_iov_md": false 00:28:31.974 }, 00:28:31.974 "driver_specific": { 00:28:31.974 "lvol": { 00:28:31.974 "lvol_store_uuid": "c95932c0-67d7-4e25-9d17-5b6082c6043a", 00:28:31.974 "base_bdev": "nvme0n1", 00:28:31.974 "thin_provision": true, 00:28:31.974 "num_allocated_clusters": 0, 00:28:31.974 "snapshot": false, 00:28:31.974 "clone": false, 00:28:31.974 "esnap_clone": false 00:28:31.974 } 00:28:31.974 } 00:28:31.974 } 00:28:31.974 ]' 00:28:31.974 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:31.974 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:31.974 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:28:32.233 14:33:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b 00:28:32.493 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:32.493 { 00:28:32.493 "name": "860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b", 00:28:32.493 "aliases": [ 00:28:32.493 "lvs/nvme0n1p0" 00:28:32.493 ], 00:28:32.493 "product_name": "Logical Volume", 00:28:32.493 "block_size": 4096, 00:28:32.493 "num_blocks": 26476544, 00:28:32.493 "uuid": "860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b", 00:28:32.493 "assigned_rate_limits": { 00:28:32.493 "rw_ios_per_sec": 0, 00:28:32.493 "rw_mbytes_per_sec": 0, 00:28:32.493 "r_mbytes_per_sec": 0, 00:28:32.493 "w_mbytes_per_sec": 0 00:28:32.493 }, 00:28:32.493 "claimed": false, 00:28:32.493 "zoned": false, 00:28:32.493 "supported_io_types": { 00:28:32.493 "read": true, 00:28:32.493 "write": true, 00:28:32.493 "unmap": true, 00:28:32.493 "flush": false, 00:28:32.493 "reset": true, 00:28:32.493 "nvme_admin": false, 00:28:32.493 "nvme_io": false, 00:28:32.493 "nvme_io_md": false, 00:28:32.493 "write_zeroes": true, 00:28:32.493 "zcopy": false, 00:28:32.493 "get_zone_info": false, 00:28:32.493 "zone_management": false, 00:28:32.493 "zone_append": false, 00:28:32.493 "compare": false, 00:28:32.493 "compare_and_write": false, 00:28:32.493 "abort": false, 
00:28:32.493 "seek_hole": true, 00:28:32.493 "seek_data": true, 00:28:32.493 "copy": false, 00:28:32.493 "nvme_iov_md": false 00:28:32.493 }, 00:28:32.493 "driver_specific": { 00:28:32.493 "lvol": { 00:28:32.493 "lvol_store_uuid": "c95932c0-67d7-4e25-9d17-5b6082c6043a", 00:28:32.493 "base_bdev": "nvme0n1", 00:28:32.493 "thin_provision": true, 00:28:32.493 "num_allocated_clusters": 0, 00:28:32.493 "snapshot": false, 00:28:32.493 "clone": false, 00:28:32.493 "esnap_clone": false 00:28:32.493 } 00:28:32.493 } 00:28:32.493 } 00:28:32.493 ]' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b --l2p_dram_limit 10' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:28:32.752 14:33:52 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 860e8ef7-4b1e-4701-b1c7-2d08e7b77a0b --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:28:33.012 [2024-07-26 14:33:52.534477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.534558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:33.012 [2024-07-26 14:33:52.534579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:33.012 [2024-07-26 14:33:52.534592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.534667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.534686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:33.012 [2024-07-26 14:33:52.534698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:33.012 [2024-07-26 14:33:52.534710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.534736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:33.012 [2024-07-26 14:33:52.535731] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:33.012 [2024-07-26 14:33:52.535778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.535796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:33.012 [2024-07-26 14:33:52.535808] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:28:33.012 [2024-07-26 14:33:52.535820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.535947] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b 00:28:33.012 [2024-07-26 14:33:52.537122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.537165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:33.012 [2024-07-26 14:33:52.537184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:33.012 [2024-07-26 14:33:52.537196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.541824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.541865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:33.012 [2024-07-26 14:33:52.541901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:28:33.012 [2024-07-26 14:33:52.541921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.542035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.542053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:33.012 [2024-07-26 14:33:52.542067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:33.012 [2024-07-26 14:33:52.542077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.542157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.542174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:33.012 [2024-07-26 14:33:52.542191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:33.012 [2024-07-26 14:33:52.542201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.542234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:33.012 [2024-07-26 14:33:52.546313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.546371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:33.012 [2024-07-26 14:33:52.546387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.089 ms 00:28:33.012 [2024-07-26 14:33:52.546399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.546439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.012 [2024-07-26 14:33:52.546457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:33.012 [2024-07-26 14:33:52.546469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:33.012 [2024-07-26 14:33:52.546480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.012 [2024-07-26 14:33:52.546531] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:33.012 [2024-07-26 14:33:52.546676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:33.012 [2024-07-26 14:33:52.546693] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:33.012 [2024-07-26 14:33:52.546711] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:33.012 [2024-07-26 14:33:52.546725] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:33.012 [2024-07-26 14:33:52.546739] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:33.013 [2024-07-26 14:33:52.546750] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:33.013 [2024-07-26 14:33:52.546766] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:33.013 [2024-07-26 14:33:52.546776] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:33.013 [2024-07-26 14:33:52.546787] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:33.013 [2024-07-26 14:33:52.546797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.013 [2024-07-26 14:33:52.546809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:33.013 [2024-07-26 14:33:52.546820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:28:33.013 [2024-07-26 14:33:52.546831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.013 [2024-07-26 14:33:52.546929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.013 [2024-07-26 14:33:52.546966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:33.013 [2024-07-26 14:33:52.546978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:28:33.013 [2024-07-26 14:33:52.546992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.013 [2024-07-26 14:33:52.547106] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:33.013 [2024-07-26 14:33:52.547129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:33.013 [2024-07-26 14:33:52.547152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547166] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:33.013 [2024-07-26 14:33:52.547189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547199] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547210] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:33.013 [2024-07-26 14:33:52.547220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547231] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:33.013 [2024-07-26 14:33:52.547240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:33.013 [2024-07-26 14:33:52.547254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:33.013 [2024-07-26 14:33:52.547263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:33.013 [2024-07-26 14:33:52.547291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:33.013 [2024-07-26 14:33:52.547318] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:33.013 [2024-07-26 14:33:52.547344] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:33.013 [2024-07-26 14:33:52.547378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547404] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547416] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:33.013 [2024-07-26 14:33:52.547427] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:33.013 [2024-07-26 14:33:52.547469] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547481] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:33.013 [2024-07-26 14:33:52.547510] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:33.013 [2024-07-26 14:33:52.547545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547567] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:33.013 [2024-07-26 14:33:52.547577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547591] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:33.013 [2024-07-26 14:33:52.547602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:33.013 [2024-07-26 14:33:52.547614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:33.013 [2024-07-26 14:33:52.547624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:33.013 [2024-07-26 14:33:52.547637] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:33.013 [2024-07-26 14:33:52.547648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:33.013 [2024-07-26 14:33:52.547659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:33.013 [2024-07-26 14:33:52.547681] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:33.013 [2024-07-26 14:33:52.547691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547703] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:33.013 [2024-07-26 14:33:52.547714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:33.013 [2024-07-26 14:33:52.547727] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:28:33.013 [2024-07-26 14:33:52.547738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:33.013 [2024-07-26 14:33:52.547751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:33.013 [2024-07-26 14:33:52.547762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:33.013 [2024-07-26 14:33:52.547776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:33.013 [2024-07-26 14:33:52.547787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:33.013 [2024-07-26 14:33:52.547799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:33.013 [2024-07-26 14:33:52.547809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:33.013 [2024-07-26 14:33:52.547825] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:33.013 [2024-07-26 14:33:52.547841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.547856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:33.013 [2024-07-26 14:33:52.547868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:33.013 [2024-07-26 14:33:52.547881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:33.013 [2024-07-26 14:33:52.547907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:33.013 [2024-07-26 14:33:52.547925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:33.013 [2024-07-26 14:33:52.547936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:33.013 [2024-07-26 14:33:52.547950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:33.013 [2024-07-26 14:33:52.547962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:33.013 [2024-07-26 14:33:52.547974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:33.013 [2024-07-26 14:33:52.547986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.548001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.548012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.548036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.548066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:28:33.013 [2024-07-26 14:33:52.548080] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:33.013 [2024-07-26 14:33:52.548093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.548108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:33.013 [2024-07-26 14:33:52.548121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:33.013 [2024-07-26 14:33:52.548135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:33.013 [2024-07-26 14:33:52.548147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:33.013 [2024-07-26 14:33:52.548162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:33.013 [2024-07-26 14:33:52.548174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:33.013 [2024-07-26 14:33:52.548188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:28:33.013 [2024-07-26 14:33:52.548200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:33.013 [2024-07-26 14:33:52.548259] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:33.013 [2024-07-26 14:33:52.548278] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:34.914 [2024-07-26 14:33:54.453120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.914 [2024-07-26 14:33:54.453183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:34.914 [2024-07-26 14:33:54.453223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1904.868 ms 00:28:34.914 [2024-07-26 14:33:54.453234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.914 [2024-07-26 14:33:54.481784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.914 [2024-07-26 14:33:54.481840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:34.914 [2024-07-26 14:33:54.481879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.324 ms 00:28:34.914 [2024-07-26 14:33:54.481890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.914 [2024-07-26 14:33:54.482113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.914 [2024-07-26 14:33:54.482134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:34.914 [2024-07-26 14:33:54.482152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:34.914 [2024-07-26 14:33:54.482163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.914 [2024-07-26 14:33:54.515554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.914 [2024-07-26 14:33:54.515600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:34.914 [2024-07-26 14:33:54.515635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.303 ms 00:28:34.914 [2024-07-26 14:33:54.515646] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.914 [2024-07-26 14:33:54.515699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.914 [2024-07-26 14:33:54.515715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:34.914 [2024-07-26 14:33:54.515733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:34.914 [2024-07-26 14:33:54.515743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.914 [2024-07-26 14:33:54.516185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.914 [2024-07-26 14:33:54.516204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:34.914 [2024-07-26 14:33:54.516219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:28:34.915 [2024-07-26 14:33:54.516231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.915 [2024-07-26 14:33:54.516402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.915 [2024-07-26 14:33:54.516423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:34.915 [2024-07-26 14:33:54.516438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:28:34.915 [2024-07-26 14:33:54.516449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.915 [2024-07-26 14:33:54.532361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.915 [2024-07-26 14:33:54.532402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:34.915 [2024-07-26 14:33:54.532451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.867 ms 00:28:34.915 [2024-07-26 14:33:54.532462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.915 [2024-07-26 14:33:54.544159] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:34.915 [2024-07-26 14:33:54.546796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.915 [2024-07-26 14:33:54.546848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:34.915 [2024-07-26 14:33:54.546864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.243 ms 00:28:34.915 [2024-07-26 14:33:54.546876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.915 [2024-07-26 14:33:54.630646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.915 [2024-07-26 14:33:54.630737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:34.915 [2024-07-26 14:33:54.630765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.735 ms 00:28:34.915 [2024-07-26 14:33:54.630778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.915 [2024-07-26 14:33:54.631014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.915 [2024-07-26 14:33:54.631056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:34.915 [2024-07-26 14:33:54.631069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:28:34.915 [2024-07-26 14:33:54.631085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:34.915 [2024-07-26 14:33:54.658941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:34.915 [2024-07-26 14:33:54.659001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:28:34.915 [2024-07-26 14:33:54.659019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.792 ms 00:28:34.915 [2024-07-26 14:33:54.659035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.686224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.686284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:35.174 [2024-07-26 14:33:54.686301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.144 ms 00:28:35.174 [2024-07-26 14:33:54.686313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.686948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.686992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.174 [2024-07-26 14:33:54.687008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:28:35.174 [2024-07-26 14:33:54.687021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.767486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.767570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:35.174 [2024-07-26 14:33:54.767590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.386 ms 00:28:35.174 [2024-07-26 14:33:54.767607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.795003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.795064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:35.174 [2024-07-26 14:33:54.795081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.347 ms 00:28:35.174 [2024-07-26 14:33:54.795093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.827056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.827156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:35.174 [2024-07-26 14:33:54.827175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.916 ms 00:28:35.174 [2024-07-26 14:33:54.827187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.856346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.856420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:35.174 [2024-07-26 14:33:54.856438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.091 ms 00:28:35.174 [2024-07-26 14:33:54.856450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.856515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 14:33:54.856537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:35.174 [2024-07-26 14:33:54.856550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:35.174 [2024-07-26 14:33:54.856565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.856683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.174 [2024-07-26 
14:33:54.856708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:35.174 [2024-07-26 14:33:54.856720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:35.174 [2024-07-26 14:33:54.856732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.174 [2024-07-26 14:33:54.857927] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2322.878 ms, result 0 00:28:35.174 { 00:28:35.174 "name": "ftl0", 00:28:35.174 "uuid": "6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b" 00:28:35.174 } 00:28:35.174 14:33:54 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:28:35.174 14:33:54 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:35.432 14:33:55 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:28:35.432 14:33:55 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:35.691 [2024-07-26 14:33:55.341229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.341302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:35.691 [2024-07-26 14:33:55.341341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.691 [2024-07-26 14:33:55.341352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.341388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:35.691 [2024-07-26 14:33:55.344432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.344483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:35.691 [2024-07-26 14:33:55.344498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.021 ms 00:28:35.691 [2024-07-26 14:33:55.344510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.344767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.344790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:35.691 [2024-07-26 14:33:55.344812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:28:35.691 [2024-07-26 14:33:55.344825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.347668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.347699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:35.691 [2024-07-26 14:33:55.347729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:28:35.691 [2024-07-26 14:33:55.347740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.353371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.353424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:35.691 [2024-07-26 14:33:55.353439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.609 ms 00:28:35.691 [2024-07-26 14:33:55.353451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.380429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.380494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:35.691 [2024-07-26 14:33:55.380511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.884 ms 00:28:35.691 [2024-07-26 14:33:55.380523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.396830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.396891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:35.691 [2024-07-26 14:33:55.396908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.259 ms 00:28:35.691 [2024-07-26 14:33:55.396960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.397160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.397185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:35.691 [2024-07-26 14:33:55.397198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:28:35.691 [2024-07-26 14:33:55.397210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.423817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.423874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:35.691 [2024-07-26 14:33:55.423890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.583 ms 00:28:35.691 [2024-07-26 14:33:55.423902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.691 [2024-07-26 14:33:55.449613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.691 [2024-07-26 14:33:55.449686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:35.691 [2024-07-26 14:33:55.449703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.641 ms 00:28:35.691 [2024-07-26 14:33:55.449715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.951 [2024-07-26 14:33:55.475699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.951 [2024-07-26 14:33:55.475759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:35.951 [2024-07-26 14:33:55.475775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.940 ms 00:28:35.951 [2024-07-26 14:33:55.475787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.951 [2024-07-26 14:33:55.502159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.951 [2024-07-26 14:33:55.502235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:35.951 [2024-07-26 14:33:55.502254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.285 ms 00:28:35.951 [2024-07-26 14:33:55.502266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.951 [2024-07-26 14:33:55.502315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:35.951 [2024-07-26 14:33:55.502344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502374] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502663] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:35.951 [2024-07-26 14:33:55.502783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.502986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 
[2024-07-26 14:33:55.502998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:28:35.952 [2024-07-26 14:33:55.503371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:35.952 [2024-07-26 14:33:55.503736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:35.952 [2024-07-26 14:33:55.503748] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b 
00:28:35.952 [2024-07-26 14:33:55.503761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:35.952 [2024-07-26 14:33:55.503772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:35.952 [2024-07-26 14:33:55.503786] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:35.952 [2024-07-26 14:33:55.503798] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:35.952 [2024-07-26 14:33:55.503810] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:35.952 [2024-07-26 14:33:55.503822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:35.952 [2024-07-26 14:33:55.503835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:35.952 [2024-07-26 14:33:55.503845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:35.952 [2024-07-26 14:33:55.503856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:35.952 [2024-07-26 14:33:55.503868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.952 [2024-07-26 14:33:55.503881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:35.952 [2024-07-26 14:33:55.503893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.555 ms 00:28:35.952 [2024-07-26 14:33:55.503909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.952 [2024-07-26 14:33:55.518601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.952 [2024-07-26 14:33:55.518657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:35.952 [2024-07-26 14:33:55.518690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.587 ms 00:28:35.952 [2024-07-26 14:33:55.518702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.952 [2024-07-26 14:33:55.519128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.952 [2024-07-26 14:33:55.519160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:35.952 [2024-07-26 14:33:55.519180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:28:35.952 [2024-07-26 14:33:55.519192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.952 [2024-07-26 14:33:55.564580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.952 [2024-07-26 14:33:55.564648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.952 [2024-07-26 14:33:55.564665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.952 [2024-07-26 14:33:55.564678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.952 [2024-07-26 14:33:55.564745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.952 [2024-07-26 14:33:55.564763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.952 [2024-07-26 14:33:55.564778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.952 [2024-07-26 14:33:55.564790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.952 [2024-07-26 14:33:55.564892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.952 [2024-07-26 14:33:55.564965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.953 [2024-07-26 14:33:55.564982] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.953 [2024-07-26 14:33:55.564996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.953 [2024-07-26 14:33:55.565023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.953 [2024-07-26 14:33:55.565042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.953 [2024-07-26 14:33:55.565071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.953 [2024-07-26 14:33:55.565086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.953 [2024-07-26 14:33:55.654459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:35.953 [2024-07-26 14:33:55.654536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.953 [2024-07-26 14:33:55.654555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:35.953 [2024-07-26 14:33:55.654568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.730924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.212 [2024-07-26 14:33:55.731048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.731190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.212 [2024-07-26 14:33:55.731225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.731299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.212 [2024-07-26 14:33:55.731352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.731485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.212 [2024-07-26 14:33:55.731521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.731582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:36.212 [2024-07-26 14:33:55.731616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.731676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:28:36.212 [2024-07-26 14:33:55.731720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.731783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:36.212 [2024-07-26 14:33:55.731805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.212 [2024-07-26 14:33:55.731816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:36.212 [2024-07-26 14:33:55.731828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.212 [2024-07-26 14:33:55.732009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.708 ms, result 0 00:28:36.212 true 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 85319 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 85319 ']' 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 85319 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85319 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85319' 00:28:36.212 killing process with pid 85319 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 85319 00:28:36.212 14:33:55 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 85319 00:28:41.478 14:34:00 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:28:44.765 262144+0 records in 00:28:44.765 262144+0 records out 00:28:44.765 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.06745 s, 264 MB/s 00:28:44.765 14:34:04 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:46.676 14:34:06 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:46.945 [2024-07-26 14:34:06.438213] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
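The commands traced just above prepare and write the restore payload: dd produces 256K records of 4 KiB, i.e. 262144 x 4096 B = 1073741824 B (1 GiB), and the reported 4.06745 s elapsed works out to roughly 264 MB/s, matching dd's own summary; the md5sum is taken so the data can be verified once the FTL device has been torn down and brought back up. A minimal sketch of that flow, reconstructed only from the commands visible in this trace (the wrapper structure and variable names are assumptions, not part of ftl/restore.sh):

    #!/usr/bin/env bash
    # Sketch of the payload-preparation step seen above (paths taken from the trace).
    set -euo pipefail

    TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    FTL_JSON=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

    # 256K records x 4 KiB = 1 GiB of random data to push through the FTL bdev.
    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K

    # Record the checksum so the restored contents can be compared later.
    md5sum "$TESTFILE"

    # Write the file into ftl0 via spdk_dd, using the bdev subsystem config saved earlier.
    "$SPDK_DD" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"
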
00:28:46.945 [2024-07-26 14:34:06.438377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85528 ] 00:28:46.945 [2024-07-26 14:34:06.611843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.203 [2024-07-26 14:34:06.821979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.462 [2024-07-26 14:34:07.099290] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.462 [2024-07-26 14:34:07.099394] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:47.722 [2024-07-26 14:34:07.256747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.256799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:47.722 [2024-07-26 14:34:07.256834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:47.722 [2024-07-26 14:34:07.256845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.256920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.256961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:47.722 [2024-07-26 14:34:07.256974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:47.722 [2024-07-26 14:34:07.256988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.257038] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:47.722 [2024-07-26 14:34:07.258058] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:47.722 [2024-07-26 14:34:07.258101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.258117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:47.722 [2024-07-26 14:34:07.258129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.089 ms 00:28:47.722 [2024-07-26 14:34:07.258140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.259303] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:47.722 [2024-07-26 14:34:07.275058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.275103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:47.722 [2024-07-26 14:34:07.275137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.755 ms 00:28:47.722 [2024-07-26 14:34:07.275148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.275257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.275280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:47.722 [2024-07-26 14:34:07.275308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:47.722 [2024-07-26 14:34:07.275318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.279863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:47.722 [2024-07-26 14:34:07.279932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:47.722 [2024-07-26 14:34:07.279950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.418 ms 00:28:47.722 [2024-07-26 14:34:07.279960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.280103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.280125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:47.722 [2024-07-26 14:34:07.280138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:28:47.722 [2024-07-26 14:34:07.280149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.280222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.280241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:47.722 [2024-07-26 14:34:07.280254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:47.722 [2024-07-26 14:34:07.280265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.280300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:47.722 [2024-07-26 14:34:07.284301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.284342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:47.722 [2024-07-26 14:34:07.284374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.010 ms 00:28:47.722 [2024-07-26 14:34:07.284400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.284472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.284487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:47.722 [2024-07-26 14:34:07.284499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:47.722 [2024-07-26 14:34:07.284508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.284567] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:47.722 [2024-07-26 14:34:07.284596] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:47.722 [2024-07-26 14:34:07.284634] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:47.722 [2024-07-26 14:34:07.284655] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:47.722 [2024-07-26 14:34:07.284755] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:47.722 [2024-07-26 14:34:07.284770] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:47.722 [2024-07-26 14:34:07.284783] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:47.722 [2024-07-26 14:34:07.284796] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:47.722 [2024-07-26 14:34:07.284807] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:47.722 [2024-07-26 14:34:07.284818] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:47.722 [2024-07-26 14:34:07.284828] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:47.722 [2024-07-26 14:34:07.284837] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:47.722 [2024-07-26 14:34:07.284847] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:47.722 [2024-07-26 14:34:07.284857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.284871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:47.722 [2024-07-26 14:34:07.284882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:28:47.722 [2024-07-26 14:34:07.284891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.285016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.722 [2024-07-26 14:34:07.285033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:47.722 [2024-07-26 14:34:07.285045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:47.722 [2024-07-26 14:34:07.285054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.722 [2024-07-26 14:34:07.285149] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:47.722 [2024-07-26 14:34:07.285166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:47.722 [2024-07-26 14:34:07.285183] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.722 [2024-07-26 14:34:07.285194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.722 [2024-07-26 14:34:07.285204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:47.722 [2024-07-26 14:34:07.285213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:47.722 [2024-07-26 14:34:07.285223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:47.722 [2024-07-26 14:34:07.285234] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:47.722 [2024-07-26 14:34:07.285243] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:47.722 [2024-07-26 14:34:07.285253] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.722 [2024-07-26 14:34:07.285262] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:47.722 [2024-07-26 14:34:07.285272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:47.722 [2024-07-26 14:34:07.285296] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:47.722 [2024-07-26 14:34:07.285305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:47.722 [2024-07-26 14:34:07.285317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:47.722 [2024-07-26 14:34:07.285326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.722 [2024-07-26 14:34:07.285335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:47.722 [2024-07-26 14:34:07.285344] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:47.722 [2024-07-26 14:34:07.285353] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.722 [2024-07-26 14:34:07.285362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:47.722 [2024-07-26 14:34:07.285383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:47.722 [2024-07-26 14:34:07.285392] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.722 [2024-07-26 14:34:07.285402] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:47.723 [2024-07-26 14:34:07.285411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.723 [2024-07-26 14:34:07.285429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:47.723 [2024-07-26 14:34:07.285438] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.723 [2024-07-26 14:34:07.285456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:47.723 [2024-07-26 14:34:07.285465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285474] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:47.723 [2024-07-26 14:34:07.285483] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:47.723 [2024-07-26 14:34:07.285492] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285501] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.723 [2024-07-26 14:34:07.285510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:47.723 [2024-07-26 14:34:07.285519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:47.723 [2024-07-26 14:34:07.285528] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:47.723 [2024-07-26 14:34:07.285537] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:47.723 [2024-07-26 14:34:07.285546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:47.723 [2024-07-26 14:34:07.285555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:47.723 [2024-07-26 14:34:07.285573] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:47.723 [2024-07-26 14:34:07.285583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285591] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:47.723 [2024-07-26 14:34:07.285602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:47.723 [2024-07-26 14:34:07.285611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:47.723 [2024-07-26 14:34:07.285622] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:47.723 [2024-07-26 14:34:07.285632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:47.723 [2024-07-26 14:34:07.285642] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:47.723 [2024-07-26 14:34:07.285651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:47.723 
[2024-07-26 14:34:07.285660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:47.723 [2024-07-26 14:34:07.285669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:47.723 [2024-07-26 14:34:07.285678] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:47.723 [2024-07-26 14:34:07.285689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:47.723 [2024-07-26 14:34:07.285701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:47.723 [2024-07-26 14:34:07.285723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:47.723 [2024-07-26 14:34:07.285733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:47.723 [2024-07-26 14:34:07.285743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:47.723 [2024-07-26 14:34:07.285753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:47.723 [2024-07-26 14:34:07.285763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:47.723 [2024-07-26 14:34:07.285773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:47.723 [2024-07-26 14:34:07.285783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:47.723 [2024-07-26 14:34:07.285793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:47.723 [2024-07-26 14:34:07.285803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:47.723 [2024-07-26 14:34:07.285853] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:47.723 [2024-07-26 14:34:07.285864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285880] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:47.723 [2024-07-26 14:34:07.285890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:47.723 [2024-07-26 14:34:07.285900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:47.723 [2024-07-26 14:34:07.285926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:47.723 [2024-07-26 14:34:07.285937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.286369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:47.723 [2024-07-26 14:34:07.286413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:28:47.723 [2024-07-26 14:34:07.286450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.322146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.322203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:47.723 [2024-07-26 14:34:07.322239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.574 ms 00:28:47.723 [2024-07-26 14:34:07.322249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.322361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.322377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:47.723 [2024-07-26 14:34:07.322389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:47.723 [2024-07-26 14:34:07.322399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.356983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.357218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:47.723 [2024-07-26 14:34:07.357354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.495 ms 00:28:47.723 [2024-07-26 14:34:07.357405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.357566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.357621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:47.723 [2024-07-26 14:34:07.357729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:47.723 [2024-07-26 14:34:07.357786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.358228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.358399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:47.723 [2024-07-26 14:34:07.358425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:47.723 [2024-07-26 14:34:07.358437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.358589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.358608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:47.723 [2024-07-26 14:34:07.358621] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:28:47.723 [2024-07-26 14:34:07.358631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.373545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.373586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:47.723 [2024-07-26 14:34:07.373634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.883 ms 00:28:47.723 [2024-07-26 14:34:07.373649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.388699] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:47.723 [2024-07-26 14:34:07.388743] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:47.723 [2024-07-26 14:34:07.388776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.388787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:47.723 [2024-07-26 14:34:07.388798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.008 ms 00:28:47.723 [2024-07-26 14:34:07.388808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.415597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.415670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:47.723 [2024-07-26 14:34:07.415710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.745 ms 00:28:47.723 [2024-07-26 14:34:07.415736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.723 [2024-07-26 14:34:07.429901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.723 [2024-07-26 14:34:07.429968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:47.723 [2024-07-26 14:34:07.430017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.104 ms 00:28:47.724 [2024-07-26 14:34:07.430042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.724 [2024-07-26 14:34:07.444675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.724 [2024-07-26 14:34:07.444714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:47.724 [2024-07-26 14:34:07.444746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.590 ms 00:28:47.724 [2024-07-26 14:34:07.444756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.724 [2024-07-26 14:34:07.445634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.724 [2024-07-26 14:34:07.445695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:47.724 [2024-07-26 14:34:07.445709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:28:47.724 [2024-07-26 14:34:07.445719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.511617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.511675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:47.983 [2024-07-26 14:34:07.511711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.872 ms 00:28:47.983 [2024-07-26 14:34:07.511723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.524648] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:47.983 [2024-07-26 14:34:07.527264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.527337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:47.983 [2024-07-26 14:34:07.527382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.467 ms 00:28:47.983 [2024-07-26 14:34:07.527408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.527520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.527539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:47.983 [2024-07-26 14:34:07.527550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:47.983 [2024-07-26 14:34:07.527560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.527651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.527690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:47.983 [2024-07-26 14:34:07.527702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:47.983 [2024-07-26 14:34:07.527712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.527742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.527755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:47.983 [2024-07-26 14:34:07.527766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:47.983 [2024-07-26 14:34:07.527776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.527811] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:47.983 [2024-07-26 14:34:07.527826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.527836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:47.983 [2024-07-26 14:34:07.527851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:47.983 [2024-07-26 14:34:07.527860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.556757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.556796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:47.983 [2024-07-26 14:34:07.556828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.875 ms 00:28:47.983 [2024-07-26 14:34:07.556838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:47.983 [2024-07-26 14:34:07.556963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:47.983 [2024-07-26 14:34:07.556987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:47.983 [2024-07-26 14:34:07.556999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:47.983 [2024-07-26 14:34:07.557009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:47.983 [2024-07-26 14:34:07.558423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.010 ms, result 0 00:29:31.697  Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:34:51.230311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.697 [2024-07-26 14:34:51.230529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:31.697 [2024-07-26 14:34:51.230663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:31.697 [2024-07-26 14:34:51.230714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.697 [2024-07-26 14:34:51.230840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:31.697 [2024-07-26 14:34:51.234238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.697 [2024-07-26 14:34:51.234466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:31.697 [2024-07-26 14:34:51.234598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:29:31.697 [2024-07-26 14:34:51.234637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.697 [2024-07-26 14:34:51.236396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.697 [2024-07-26 14:34:51.236469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:31.698 [2024-07-26 14:34:51.236517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.716 ms 00:29:31.698 [2024-07-26 14:34:51.236528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.698 [2024-07-26 14:34:51.236560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.698 [2024-07-26 14:34:51.236576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:29:31.698 [2024-07-26 14:34:51.236588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:31.698 [2024-07-26 14:34:51.236598] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.698 [2024-07-26 14:34:51.236651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.698 [2024-07-26 14:34:51.236668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:29:31.698 [2024-07-26 14:34:51.236680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:29:31.698 [2024-07-26 14:34:51.236690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.698 [2024-07-26 14:34:51.236708] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:31.698 [2024-07-26 14:34:51.236724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236966] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.236999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237277] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 
14:34:51.237586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:31.698 [2024-07-26 14:34:51.237673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:29:31.699 [2024-07-26 14:34:51.237855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:31.699 [2024-07-26 14:34:51.237917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:31.699 [2024-07-26 14:34:51.237927] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b 00:29:31.699 [2024-07-26 14:34:51.238251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:31.699 [2024-07-26 14:34:51.238309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:29:31.699 [2024-07-26 14:34:51.238350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:31.699 [2024-07-26 14:34:51.238502] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:31.699 [2024-07-26 14:34:51.238558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:31.699 [2024-07-26 14:34:51.238596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:31.699 [2024-07-26 14:34:51.238631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:31.699 [2024-07-26 14:34:51.238742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:31.699 [2024-07-26 14:34:51.238786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:31.699 [2024-07-26 14:34:51.238823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.699 [2024-07-26 14:34:51.238859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:31.699 [2024-07-26 14:34:51.238907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:29:31.699 [2024-07-26 14:34:51.238951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.254789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.699 [2024-07-26 14:34:51.254873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:31.699 [2024-07-26 14:34:51.255037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.739 ms 00:29:31.699 [2024-07-26 14:34:51.255099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.255590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:31.699 [2024-07-26 14:34:51.255646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:31.699 [2024-07-26 14:34:51.255858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:29:31.699 [2024-07-26 14:34:51.255932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.290210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.290459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:29:31.699 [2024-07-26 14:34:51.290583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.290634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.290824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.290888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:31.699 [2024-07-26 14:34:51.291112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.291135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.291243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.291264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:31.699 [2024-07-26 14:34:51.291276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.291293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.291315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.291329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:31.699 [2024-07-26 14:34:51.291340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.291351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.375312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.375371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:31.699 [2024-07-26 14:34:51.375411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.375421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.455802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.455877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:31.699 [2024-07-26 14:34:51.455940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.455971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.456096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:31.699 [2024-07-26 14:34:51.456109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.456120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.456220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:31.699 [2024-07-26 14:34:51.456232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.456243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.456359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:31.699 [2024-07-26 14:34:51.456371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.456382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.456443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:31.699 [2024-07-26 14:34:51.456455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.456466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.456525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:31.699 [2024-07-26 14:34:51.456536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.456548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:31.699 [2024-07-26 14:34:51.456632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:31.699 [2024-07-26 14:34:51.456644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:31.699 [2024-07-26 14:34:51.456655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:31.699 [2024-07-26 14:34:51.456795] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 226.448 ms, result 0 00:29:33.087 00:29:33.087 00:29:33.087 14:34:52 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:33.087 [2024-07-26 14:34:52.704892] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:33.087 [2024-07-26 14:34:52.705082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85973 ] 00:29:33.347 [2024-07-26 14:34:52.876402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.347 [2024-07-26 14:34:53.033737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.616 [2024-07-26 14:34:53.306348] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.616 [2024-07-26 14:34:53.306449] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.881 [2024-07-26 14:34:53.464355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.464440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:33.881 [2024-07-26 14:34:53.464476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:33.881 [2024-07-26 14:34:53.464486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.464561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.464577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:33.881 [2024-07-26 14:34:53.464588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:33.881 [2024-07-26 14:34:53.464601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.464633] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:33.881 [2024-07-26 14:34:53.465540] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:33.881 [2024-07-26 14:34:53.465619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.465633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:33.881 [2024-07-26 14:34:53.465645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.996 ms 00:29:33.881 [2024-07-26 14:34:53.465655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.466094] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:29:33.881 [2024-07-26 14:34:53.466121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.466134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:33.881 [2024-07-26 14:34:53.466152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:33.881 [2024-07-26 14:34:53.466163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.466217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.466234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:33.881 [2024-07-26 14:34:53.466245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:33.881 [2024-07-26 14:34:53.466256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.466659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:33.881 [2024-07-26 14:34:53.466676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:33.881 [2024-07-26 14:34:53.466690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:29:33.881 [2024-07-26 14:34:53.466700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.466771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.466787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:33.881 [2024-07-26 14:34:53.466797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:29:33.881 [2024-07-26 14:34:53.466807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.466838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.466851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:33.881 [2024-07-26 14:34:53.466862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:33.881 [2024-07-26 14:34:53.466871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.466915] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:33.881 [2024-07-26 14:34:53.471318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.471374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:33.881 [2024-07-26 14:34:53.471407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.423 ms 00:29:33.881 [2024-07-26 14:34:53.471418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.471475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.471490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:33.881 [2024-07-26 14:34:53.471502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:29:33.881 [2024-07-26 14:34:53.471511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.471575] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:33.881 [2024-07-26 14:34:53.471631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:33.881 [2024-07-26 14:34:53.471673] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:33.881 [2024-07-26 14:34:53.471691] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:33.881 [2024-07-26 14:34:53.471833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:33.881 [2024-07-26 14:34:53.471850] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:33.881 [2024-07-26 14:34:53.471865] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:33.881 [2024-07-26 14:34:53.471880] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:33.881 [2024-07-26 14:34:53.471893] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:33.881 [2024-07-26 14:34:53.471905] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:33.881 [2024-07-26 14:34:53.471915] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:33.881 [2024-07-26 14:34:53.471930] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:33.881 [2024-07-26 14:34:53.471941] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:33.881 [2024-07-26 14:34:53.471952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.471963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:33.881 [2024-07-26 14:34:53.471974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:29:33.881 [2024-07-26 14:34:53.471990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.472114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.881 [2024-07-26 14:34:53.472132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:33.881 [2024-07-26 14:34:53.472143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:33.881 [2024-07-26 14:34:53.472154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.881 [2024-07-26 14:34:53.472262] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:33.881 [2024-07-26 14:34:53.472279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:33.881 [2024-07-26 14:34:53.472291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.881 [2024-07-26 14:34:53.472302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:33.881 [2024-07-26 14:34:53.472324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472334] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:33.881 [2024-07-26 14:34:53.472344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:33.881 [2024-07-26 14:34:53.472355] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.881 [2024-07-26 14:34:53.472375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:33.881 [2024-07-26 14:34:53.472385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:33.881 [2024-07-26 14:34:53.472395] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.881 [2024-07-26 14:34:53.472420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:33.881 [2024-07-26 14:34:53.472430] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:33.881 [2024-07-26 14:34:53.472457] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:33.881 [2024-07-26 14:34:53.472478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:33.881 [2024-07-26 14:34:53.472487] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472498] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:33.881 [2024-07-26 14:34:53.472508] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472518] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.881 [2024-07-26 14:34:53.472542] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:33.881 [2024-07-26 14:34:53.472553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:33.881 [2024-07-26 14:34:53.472563] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.882 [2024-07-26 14:34:53.472574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:33.882 [2024-07-26 14:34:53.472584] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:33.882 [2024-07-26 14:34:53.472594] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.882 [2024-07-26 14:34:53.472604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:33.882 [2024-07-26 14:34:53.472614] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:33.882 [2024-07-26 14:34:53.472624] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.882 [2024-07-26 14:34:53.472634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:33.882 [2024-07-26 14:34:53.472644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:33.882 [2024-07-26 14:34:53.472654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.882 [2024-07-26 14:34:53.472664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:33.882 [2024-07-26 14:34:53.472675] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:33.882 [2024-07-26 14:34:53.472685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.882 [2024-07-26 14:34:53.472695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:33.882 [2024-07-26 14:34:53.472705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:33.882 [2024-07-26 14:34:53.472715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.882 [2024-07-26 14:34:53.472725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:33.882 [2024-07-26 14:34:53.472735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:33.882 [2024-07-26 14:34:53.472745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.882 [2024-07-26 14:34:53.472754] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:33.882 [2024-07-26 14:34:53.472765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:33.882 [2024-07-26 14:34:53.472775] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.882 [2024-07-26 14:34:53.472786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.882 [2024-07-26 14:34:53.472798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:33.882 [2024-07-26 14:34:53.472810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:33.882 [2024-07-26 14:34:53.472835] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:33.882 
[2024-07-26 14:34:53.472845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:33.882 [2024-07-26 14:34:53.472855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:33.882 [2024-07-26 14:34:53.472865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:33.882 [2024-07-26 14:34:53.472876] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:33.882 [2024-07-26 14:34:53.472889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.472906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:33.882 [2024-07-26 14:34:53.472916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:33.882 [2024-07-26 14:34:53.472943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:33.882 [2024-07-26 14:34:53.472954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:33.882 [2024-07-26 14:34:53.472981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:33.882 [2024-07-26 14:34:53.472993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:33.882 [2024-07-26 14:34:53.473003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:33.882 [2024-07-26 14:34:53.473015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:33.882 [2024-07-26 14:34:53.473025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:33.882 [2024-07-26 14:34:53.473036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.473047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.473058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.473069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.473080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:33.882 [2024-07-26 14:34:53.473092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:33.882 [2024-07-26 14:34:53.473104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.473116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:33.882 [2024-07-26 14:34:53.473127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:33.882 [2024-07-26 14:34:53.473138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:33.882 [2024-07-26 14:34:53.473150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:33.882 [2024-07-26 14:34:53.473162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.473173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:33.882 [2024-07-26 14:34:53.473185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:29:33.882 [2024-07-26 14:34:53.473195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.510997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.511048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.882 [2024-07-26 14:34:53.511101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.727 ms 00:29:33.882 [2024-07-26 14:34:53.511112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.511223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.511239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:33.882 [2024-07-26 14:34:53.511251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:33.882 [2024-07-26 14:34:53.511268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.544093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.544147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.882 [2024-07-26 14:34:53.544180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.668 ms 00:29:33.882 [2024-07-26 14:34:53.544191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.544257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.544273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.882 [2024-07-26 14:34:53.544284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:33.882 [2024-07-26 14:34:53.544294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.544448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.544465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:33.882 [2024-07-26 14:34:53.544476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:33.882 [2024-07-26 14:34:53.544486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.544616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.544633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.882 [2024-07-26 14:34:53.544648] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:29:33.882 [2024-07-26 14:34:53.544657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.558542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.558582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.882 [2024-07-26 14:34:53.558614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.861 ms 00:29:33.882 [2024-07-26 14:34:53.558624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.882 [2024-07-26 14:34:53.558794] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:33.882 [2024-07-26 14:34:53.558816] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:33.882 [2024-07-26 14:34:53.558828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.882 [2024-07-26 14:34:53.558842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:33.882 [2024-07-26 14:34:53.558853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:33.882 [2024-07-26 14:34:53.558862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.570721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.570751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:33.883 [2024-07-26 14:34:53.570781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.835 ms 00:29:33.883 [2024-07-26 14:34:53.570791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.570896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.571101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:33.883 [2024-07-26 14:34:53.571161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:29:33.883 [2024-07-26 14:34:53.571177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.571258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.571277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:33.883 [2024-07-26 14:34:53.571290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:33.883 [2024-07-26 14:34:53.571300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.572057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.572112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:33.883 [2024-07-26 14:34:53.572142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:29:33.883 [2024-07-26 14:34:53.572153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.572175] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:29:33.883 [2024-07-26 14:34:53.572196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.572218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:29:33.883 [2024-07-26 14:34:53.572230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:33.883 [2024-07-26 14:34:53.572241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.584519] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:33.883 [2024-07-26 14:34:53.584762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.584781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:33.883 [2024-07-26 14:34:53.584794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.495 ms 00:29:33.883 [2024-07-26 14:34:53.584805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.587167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.587201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:33.883 [2024-07-26 14:34:53.587247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:29:33.883 [2024-07-26 14:34:53.587257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.587375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.587393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:33.883 [2024-07-26 14:34:53.587404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:29:33.883 [2024-07-26 14:34:53.587428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.587457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.587471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:33.883 [2024-07-26 14:34:53.587487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:33.883 [2024-07-26 14:34:53.587496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.587529] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:33.883 [2024-07-26 14:34:53.587544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.587554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:33.883 [2024-07-26 14:34:53.587564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:33.883 [2024-07-26 14:34:53.587573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.618358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.618753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:33.883 [2024-07-26 14:34:53.618791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.763 ms 00:29:33.883 [2024-07-26 14:34:53.618806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.618969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.883 [2024-07-26 14:34:53.618990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:33.883 [2024-07-26 14:34:53.619003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.085 ms 00:29:33.883 [2024-07-26 14:34:53.619016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.883 [2024-07-26 14:34:53.620215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.303 ms, result 0 00:30:17.641  Copying: 23/1024 [MB] (23 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 71/1024 [MB] (23 MBps) Copying: 95/1024 [MB] (24 MBps) Copying: 118/1024 [MB] (23 MBps) Copying: 143/1024 [MB] (24 MBps) Copying: 167/1024 [MB] (23 MBps) Copying: 191/1024 [MB] (24 MBps) Copying: 214/1024 [MB] (23 MBps) Copying: 238/1024 [MB] (23 MBps) Copying: 261/1024 [MB] (23 MBps) Copying: 285/1024 [MB] (23 MBps) Copying: 308/1024 [MB] (23 MBps) Copying: 332/1024 [MB] (23 MBps) Copying: 355/1024 [MB] (23 MBps) Copying: 379/1024 [MB] (23 MBps) Copying: 402/1024 [MB] (23 MBps) Copying: 425/1024 [MB] (23 MBps) Copying: 449/1024 [MB] (23 MBps) Copying: 473/1024 [MB] (23 MBps) Copying: 496/1024 [MB] (23 MBps) Copying: 520/1024 [MB] (23 MBps) Copying: 544/1024 [MB] (23 MBps) Copying: 567/1024 [MB] (23 MBps) Copying: 591/1024 [MB] (23 MBps) Copying: 614/1024 [MB] (23 MBps) Copying: 639/1024 [MB] (24 MBps) Copying: 662/1024 [MB] (23 MBps) Copying: 686/1024 [MB] (24 MBps) Copying: 710/1024 [MB] (24 MBps) Copying: 734/1024 [MB] (24 MBps) Copying: 758/1024 [MB] (23 MBps) Copying: 782/1024 [MB] (23 MBps) Copying: 806/1024 [MB] (24 MBps) Copying: 830/1024 [MB] (23 MBps) Copying: 853/1024 [MB] (23 MBps) Copying: 877/1024 [MB] (23 MBps) Copying: 900/1024 [MB] (23 MBps) Copying: 923/1024 [MB] (23 MBps) Copying: 947/1024 [MB] (24 MBps) Copying: 972/1024 [MB] (24 MBps) Copying: 995/1024 [MB] (23 MBps) Copying: 1019/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:35:37.356087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.641 [2024-07-26 14:35:37.356808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:17.641 [2024-07-26 14:35:37.357520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:17.641 [2024-07-26 14:35:37.357632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.641 [2024-07-26 14:35:37.357744] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:17.641 [2024-07-26 14:35:37.361253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.641 [2024-07-26 14:35:37.361370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:17.641 [2024-07-26 14:35:37.361454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.383 ms 00:30:17.641 [2024-07-26 14:35:37.361530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.641 [2024-07-26 14:35:37.361840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.641 [2024-07-26 14:35:37.361978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:17.641 [2024-07-26 14:35:37.362067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:30:17.641 [2024-07-26 14:35:37.362141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.641 [2024-07-26 14:35:37.362241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.641 [2024-07-26 14:35:37.362344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:17.641 [2024-07-26 14:35:37.362449] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:17.641 [2024-07-26 14:35:37.362530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.641 [2024-07-26 14:35:37.362667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.641 [2024-07-26 14:35:37.362764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:17.641 [2024-07-26 14:35:37.362862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:17.641 [2024-07-26 14:35:37.362975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.641 [2024-07-26 14:35:37.363078] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:17.641 [2024-07-26 14:35:37.363183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.363994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364647] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:17.641 [2024-07-26 14:35:37.364728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.365996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 
14:35:37.366076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:30:17.642 [2024-07-26 14:35:37.366366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:17.642 [2024-07-26 14:35:37.366445] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:17.642 [2024-07-26 14:35:37.366462] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b 00:30:17.642 [2024-07-26 14:35:37.366475] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:17.642 [2024-07-26 14:35:37.366485] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:17.642 [2024-07-26 14:35:37.366496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:17.642 [2024-07-26 14:35:37.366508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:17.642 [2024-07-26 14:35:37.366518] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:17.642 [2024-07-26 14:35:37.366529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:17.642 [2024-07-26 14:35:37.366540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:17.642 [2024-07-26 14:35:37.366551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:17.642 [2024-07-26 14:35:37.366561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:17.642 [2024-07-26 14:35:37.366573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.642 [2024-07-26 14:35:37.366585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:17.642 [2024-07-26 14:35:37.366597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:30:17.642 [2024-07-26 14:35:37.366609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.642 [2024-07-26 14:35:37.383679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.642 [2024-07-26 14:35:37.383749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:17.642 [2024-07-26 14:35:37.383766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:30:17.642 [2024-07-26 14:35:37.383776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.642 [2024-07-26 14:35:37.384295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:17.642 [2024-07-26 14:35:37.384349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:17.642 [2024-07-26 14:35:37.384363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:30:17.642 [2024-07-26 14:35:37.384381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.909 [2024-07-26 14:35:37.421887] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:30:17.909 [2024-07-26 14:35:37.421997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:17.909 [2024-07-26 14:35:37.422016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.909 [2024-07-26 14:35:37.422027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.909 [2024-07-26 14:35:37.422097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.909 [2024-07-26 14:35:37.422112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:17.909 [2024-07-26 14:35:37.422123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.909 [2024-07-26 14:35:37.422139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.909 [2024-07-26 14:35:37.422207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.422225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:17.910 [2024-07-26 14:35:37.422237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.422248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.422268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.422296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:17.910 [2024-07-26 14:35:37.422307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.422333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.510592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.510681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:17.910 [2024-07-26 14:35:37.510698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.510708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.591645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.591716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:17.910 [2024-07-26 14:35:37.591732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.591748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.591844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.591859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:17.910 [2024-07-26 14:35:37.591870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.591879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.591974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.591993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:17.910 [2024-07-26 14:35:37.592005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.592017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:17.910 [2024-07-26 14:35:37.592129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.592150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:17.910 [2024-07-26 14:35:37.592162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.592173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.592215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.592232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:17.910 [2024-07-26 14:35:37.592244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.592256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.592303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.592353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:17.910 [2024-07-26 14:35:37.592378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.592402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.592446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:17.910 [2024-07-26 14:35:37.592460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:17.910 [2024-07-26 14:35:37.592471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:17.910 [2024-07-26 14:35:37.592480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:17.910 [2024-07-26 14:35:37.592606] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 236.537 ms, result 0 00:30:18.846 00:30:18.846 00:30:18.846 14:35:38 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:20.750 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:20.750 14:35:40 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:21.008 [2024-07-26 14:35:40.602537] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:21.008 [2024-07-26 14:35:40.602716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86433 ] 00:30:21.267 [2024-07-26 14:35:40.775666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.267 [2024-07-26 14:35:40.975522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.526 [2024-07-26 14:35:41.250714] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:21.526 [2024-07-26 14:35:41.250816] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:21.786 [2024-07-26 14:35:41.409345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.409415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:21.786 [2024-07-26 14:35:41.409449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:21.786 [2024-07-26 14:35:41.409459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.409515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.409532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:21.786 [2024-07-26 14:35:41.409542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:21.786 [2024-07-26 14:35:41.409554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.409585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:21.786 [2024-07-26 14:35:41.410516] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:21.786 [2024-07-26 14:35:41.410585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.410597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:21.786 [2024-07-26 14:35:41.410608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:30:21.786 [2024-07-26 14:35:41.410617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.411086] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:21.786 [2024-07-26 14:35:41.411129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.411142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:21.786 [2024-07-26 14:35:41.411160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:21.786 [2024-07-26 14:35:41.411171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.411238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.411254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:21.786 [2024-07-26 14:35:41.411265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:21.786 [2024-07-26 14:35:41.411289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.411702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:21.786 [2024-07-26 14:35:41.411729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:21.786 [2024-07-26 14:35:41.411746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:30:21.786 [2024-07-26 14:35:41.411756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.411832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.411849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:21.786 [2024-07-26 14:35:41.411860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:21.786 [2024-07-26 14:35:41.411870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.411946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.411963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:21.786 [2024-07-26 14:35:41.411974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:30:21.786 [2024-07-26 14:35:41.411984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.412017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:21.786 [2024-07-26 14:35:41.415990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.416026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:21.786 [2024-07-26 14:35:41.416079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.979 ms 00:30:21.786 [2024-07-26 14:35:41.416090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.416140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.786 [2024-07-26 14:35:41.416158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:21.786 [2024-07-26 14:35:41.416170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:21.786 [2024-07-26 14:35:41.416180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.786 [2024-07-26 14:35:41.416246] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:21.786 [2024-07-26 14:35:41.416278] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:21.786 [2024-07-26 14:35:41.416323] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:21.786 [2024-07-26 14:35:41.416343] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:21.786 [2024-07-26 14:35:41.416485] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:21.786 [2024-07-26 14:35:41.416501] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:21.786 [2024-07-26 14:35:41.416516] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:21.786 [2024-07-26 14:35:41.416529] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:21.786 [2024-07-26 14:35:41.416541] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:21.786 [2024-07-26 14:35:41.416551] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:21.786 [2024-07-26 14:35:41.416561] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:21.787 [2024-07-26 14:35:41.416575] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:21.787 [2024-07-26 14:35:41.416584] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:21.787 [2024-07-26 14:35:41.416594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.787 [2024-07-26 14:35:41.416604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:21.787 [2024-07-26 14:35:41.416615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:30:21.787 [2024-07-26 14:35:41.416624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.787 [2024-07-26 14:35:41.416710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.787 [2024-07-26 14:35:41.416724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:21.787 [2024-07-26 14:35:41.416735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:21.787 [2024-07-26 14:35:41.416744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.787 [2024-07-26 14:35:41.416840] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:21.787 [2024-07-26 14:35:41.416861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:21.787 [2024-07-26 14:35:41.416873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:21.787 [2024-07-26 14:35:41.416884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.416907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:21.787 [2024-07-26 14:35:41.416919] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.416930] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:21.787 [2024-07-26 14:35:41.416939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:21.787 [2024-07-26 14:35:41.416949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:21.787 [2024-07-26 14:35:41.416958] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:21.787 [2024-07-26 14:35:41.416967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:21.787 [2024-07-26 14:35:41.416976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:21.787 [2024-07-26 14:35:41.416985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:21.787 [2024-07-26 14:35:41.416995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:21.787 [2024-07-26 14:35:41.417006] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:21.787 [2024-07-26 14:35:41.417015] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:21.787 [2024-07-26 14:35:41.417034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417043] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:21.787 [2024-07-26 14:35:41.417061] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417070] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417093] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:21.787 [2024-07-26 14:35:41.417103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:21.787 [2024-07-26 14:35:41.417130] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417149] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:21.787 [2024-07-26 14:35:41.417158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:21.787 [2024-07-26 14:35:41.417185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417194] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:21.787 [2024-07-26 14:35:41.417203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:21.787 [2024-07-26 14:35:41.417212] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:21.787 [2024-07-26 14:35:41.417221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:21.787 [2024-07-26 14:35:41.417230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:21.787 [2024-07-26 14:35:41.417239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:21.787 [2024-07-26 14:35:41.417248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:21.787 [2024-07-26 14:35:41.417267] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:21.787 [2024-07-26 14:35:41.417275] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417284] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:21.787 [2024-07-26 14:35:41.417294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:21.787 [2024-07-26 14:35:41.417304] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417316] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:21.787 [2024-07-26 14:35:41.417326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:21.787 [2024-07-26 14:35:41.417335] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:21.787 [2024-07-26 14:35:41.417345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:21.787 
[2024-07-26 14:35:41.417354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:21.787 [2024-07-26 14:35:41.417363] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:21.787 [2024-07-26 14:35:41.417373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:21.787 [2024-07-26 14:35:41.417384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:21.787 [2024-07-26 14:35:41.417396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:21.787 [2024-07-26 14:35:41.417422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:21.787 [2024-07-26 14:35:41.417432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:21.787 [2024-07-26 14:35:41.417442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:21.787 [2024-07-26 14:35:41.417452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:21.787 [2024-07-26 14:35:41.417462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:21.787 [2024-07-26 14:35:41.417472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:21.787 [2024-07-26 14:35:41.417482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:21.787 [2024-07-26 14:35:41.417491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:21.787 [2024-07-26 14:35:41.417501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:21.787 [2024-07-26 14:35:41.417550] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:21.787 [2024-07-26 14:35:41.417561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:21.787 [2024-07-26 14:35:41.417582] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:21.787 [2024-07-26 14:35:41.417592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:21.787 [2024-07-26 14:35:41.417603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:21.787 [2024-07-26 14:35:41.417613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.787 [2024-07-26 14:35:41.417624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:21.787 [2024-07-26 14:35:41.417634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:30:21.787 [2024-07-26 14:35:41.417644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.787 [2024-07-26 14:35:41.451206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.787 [2024-07-26 14:35:41.451269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:21.787 [2024-07-26 14:35:41.451302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.508 ms 00:30:21.787 [2024-07-26 14:35:41.451326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.787 [2024-07-26 14:35:41.451420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.787 [2024-07-26 14:35:41.451434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:21.787 [2024-07-26 14:35:41.451444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:21.787 [2024-07-26 14:35:41.451453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.484958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.485044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:21.788 [2024-07-26 14:35:41.485062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.396 ms 00:30:21.788 [2024-07-26 14:35:41.485074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.485130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.485151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:21.788 [2024-07-26 14:35:41.485164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:21.788 [2024-07-26 14:35:41.485174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.485349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.485368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:21.788 [2024-07-26 14:35:41.485380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:30:21.788 [2024-07-26 14:35:41.485391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.485538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.485558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:21.788 [2024-07-26 14:35:41.485573] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:30:21.788 [2024-07-26 14:35:41.485584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.500965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.501035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:21.788 [2024-07-26 14:35:41.501071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.356 ms 00:30:21.788 [2024-07-26 14:35:41.501082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.501245] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:21.788 [2024-07-26 14:35:41.501298] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:21.788 [2024-07-26 14:35:41.501327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.501338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:21.788 [2024-07-26 14:35:41.501349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:30:21.788 [2024-07-26 14:35:41.501363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.513592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.513634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:21.788 [2024-07-26 14:35:41.513662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.209 ms 00:30:21.788 [2024-07-26 14:35:41.513672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.513796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.513810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:21.788 [2024-07-26 14:35:41.513821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:30:21.788 [2024-07-26 14:35:41.513829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.513935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.513972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:21.788 [2024-07-26 14:35:41.513987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:30:21.788 [2024-07-26 14:35:41.513997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.514680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.514719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:21.788 [2024-07-26 14:35:41.514732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:30:21.788 [2024-07-26 14:35:41.514741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.514765] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:21.788 [2024-07-26 14:35:41.514781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.514819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:30:21.788 [2024-07-26 14:35:41.514829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:21.788 [2024-07-26 14:35:41.514839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.525647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:21.788 [2024-07-26 14:35:41.525859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.525876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:21.788 [2024-07-26 14:35:41.525887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.998 ms 00:30:21.788 [2024-07-26 14:35:41.525896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.527834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.527877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:21.788 [2024-07-26 14:35:41.527915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.871 ms 00:30:21.788 [2024-07-26 14:35:41.527927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.528014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.528057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:21.788 [2024-07-26 14:35:41.528085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:21.788 [2024-07-26 14:35:41.528095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.528139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.528153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:21.788 [2024-07-26 14:35:41.528170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:30:21.788 [2024-07-26 14:35:41.528180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:21.788 [2024-07-26 14:35:41.528215] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:21.788 [2024-07-26 14:35:41.528231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:21.788 [2024-07-26 14:35:41.528241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:21.788 [2024-07-26 14:35:41.528252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:30:21.788 [2024-07-26 14:35:41.528262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.047 [2024-07-26 14:35:41.555670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.047 [2024-07-26 14:35:41.555727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:22.047 [2024-07-26 14:35:41.555757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.387 ms 00:30:22.047 [2024-07-26 14:35:41.555767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.047 [2024-07-26 14:35:41.555838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:22.047 [2024-07-26 14:35:41.555854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:22.047 [2024-07-26 14:35:41.555865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:30:22.048 [2024-07-26 14:35:41.555874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:22.048 [2024-07-26 14:35:41.557182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 147.265 ms, result 0 00:31:06.549  Copying: 23/1024 [MB] (23 MBps) Copying: 46/1024 [MB] (23 MBps) Copying: 69/1024 [MB] (23 MBps) Copying: 93/1024 [MB] (23 MBps) Copying: 116/1024 [MB] (23 MBps) Copying: 139/1024 [MB] (23 MBps) Copying: 163/1024 [MB] (23 MBps) Copying: 187/1024 [MB] (23 MBps) Copying: 210/1024 [MB] (23 MBps) Copying: 233/1024 [MB] (23 MBps) Copying: 256/1024 [MB] (22 MBps) Copying: 279/1024 [MB] (23 MBps) Copying: 304/1024 [MB] (24 MBps) Copying: 328/1024 [MB] (24 MBps) Copying: 352/1024 [MB] (24 MBps) Copying: 376/1024 [MB] (23 MBps) Copying: 401/1024 [MB] (24 MBps) Copying: 424/1024 [MB] (23 MBps) Copying: 448/1024 [MB] (23 MBps) Copying: 471/1024 [MB] (23 MBps) Copying: 495/1024 [MB] (24 MBps) Copying: 520/1024 [MB] (25 MBps) Copying: 544/1024 [MB] (23 MBps) Copying: 568/1024 [MB] (23 MBps) Copying: 592/1024 [MB] (24 MBps) Copying: 616/1024 [MB] (23 MBps) Copying: 639/1024 [MB] (23 MBps) Copying: 663/1024 [MB] (23 MBps) Copying: 686/1024 [MB] (23 MBps) Copying: 710/1024 [MB] (23 MBps) Copying: 734/1024 [MB] (24 MBps) Copying: 757/1024 [MB] (23 MBps) Copying: 781/1024 [MB] (23 MBps) Copying: 804/1024 [MB] (23 MBps) Copying: 828/1024 [MB] (23 MBps) Copying: 852/1024 [MB] (23 MBps) Copying: 875/1024 [MB] (22 MBps) Copying: 898/1024 [MB] (23 MBps) Copying: 922/1024 [MB] (24 MBps) Copying: 946/1024 [MB] (23 MBps) Copying: 970/1024 [MB] (23 MBps) Copying: 994/1024 [MB] (24 MBps) Copying: 1017/1024 [MB] (23 MBps) Copying: 1048220/1048576 [kB] (5916 kBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:36:26.015851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.549 [2024-07-26 14:36:26.015957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:06.549 [2024-07-26 14:36:26.015994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:06.549 [2024-07-26 14:36:26.016005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.549 [2024-07-26 14:36:26.017658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:06.549 [2024-07-26 14:36:26.023089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.549 [2024-07-26 14:36:26.023141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:06.549 [2024-07-26 14:36:26.023173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.383 ms 00:31:06.549 [2024-07-26 14:36:26.023184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.549 [2024-07-26 14:36:26.033743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.549 [2024-07-26 14:36:26.033799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:06.549 [2024-07-26 14:36:26.033838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.079 ms 00:31:06.549 [2024-07-26 14:36:26.033849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.549 [2024-07-26 14:36:26.033881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.549 [2024-07-26 14:36:26.033894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 
00:31:06.549 [2024-07-26 14:36:26.033905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:06.549 [2024-07-26 14:36:26.033929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.549 [2024-07-26 14:36:26.033982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.549 [2024-07-26 14:36:26.033996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:06.549 [2024-07-26 14:36:26.034007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:06.549 [2024-07-26 14:36:26.034020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.549 [2024-07-26 14:36:26.034037] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:06.549 [2024-07-26 14:36:26.034052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130816 / 261120 wr_cnt: 1 state: open 00:31:06.549 [2024-07-26 14:36:26.034095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034305] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:06.549 [2024-07-26 14:36:26.034401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 
[2024-07-26 14:36:26.034571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 
state: free 00:31:06.550 [2024-07-26 14:36:26.034851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.034999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 
0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:06.550 [2024-07-26 14:36:26.035226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:06.550 [2024-07-26 14:36:26.035244] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b 00:31:06.550 [2024-07-26 14:36:26.035255] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130816 00:31:06.551 [2024-07-26 14:36:26.035265] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130848 00:31:06.551 [2024-07-26 14:36:26.035276] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130816 00:31:06.551 [2024-07-26 14:36:26.035287] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:06.551 [2024-07-26 14:36:26.035297] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:06.551 [2024-07-26 14:36:26.035308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:06.551 [2024-07-26 14:36:26.035322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:06.551 [2024-07-26 14:36:26.035332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:06.551 [2024-07-26 14:36:26.035341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:06.551 [2024-07-26 14:36:26.035352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.551 [2024-07-26 14:36:26.035362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:06.551 [2024-07-26 14:36:26.035372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:31:06.551 [2024-07-26 14:36:26.035382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.050144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.551 [2024-07-26 14:36:26.050196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:06.551 [2024-07-26 14:36:26.050226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:31:06.551 [2024-07-26 14:36:26.050236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.050650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.551 [2024-07-26 14:36:26.050674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:06.551 [2024-07-26 14:36:26.050686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:31:06.551 [2024-07-26 14:36:26.050701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 
14:36:26.080894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.080963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:06.551 [2024-07-26 14:36:26.080998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.081008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.081070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.081083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:06.551 [2024-07-26 14:36:26.081093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.081102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.081164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.081180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:06.551 [2024-07-26 14:36:26.081190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.081220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.081256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.081284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:06.551 [2024-07-26 14:36:26.081294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.081303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.159562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.159639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:06.551 [2024-07-26 14:36:26.159671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.159687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.225979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:06.551 [2024-07-26 14:36:26.226090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:06.551 [2024-07-26 14:36:26.226230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:06.551 [2024-07-26 14:36:26.226307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226316] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:06.551 [2024-07-26 14:36:26.226483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:06.551 [2024-07-26 14:36:26.226559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:06.551 [2024-07-26 14:36:26.226632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:06.551 [2024-07-26 14:36:26.226712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:06.551 [2024-07-26 14:36:26.226723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:06.551 [2024-07-26 14:36:26.226733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.551 [2024-07-26 14:36:26.226859] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.939 ms, result 0 00:31:07.926 00:31:07.926 00:31:07.926 14:36:27 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:07.926 [2024-07-26 14:36:27.587717] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:07.926 [2024-07-26 14:36:27.587924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86878 ] 00:31:08.184 [2024-07-26 14:36:27.759752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.184 [2024-07-26 14:36:27.916903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.443 [2024-07-26 14:36:28.199354] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:08.443 [2024-07-26 14:36:28.199453] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:08.703 [2024-07-26 14:36:28.356523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.356594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:08.703 [2024-07-26 14:36:28.356629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:08.703 [2024-07-26 14:36:28.356640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.356701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.356718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:08.703 [2024-07-26 14:36:28.356729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:08.703 [2024-07-26 14:36:28.356742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.356775] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:08.703 [2024-07-26 14:36:28.357749] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:08.703 [2024-07-26 14:36:28.357821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.357837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:08.703 [2024-07-26 14:36:28.357849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:31:08.703 [2024-07-26 14:36:28.357860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.358383] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:08.703 [2024-07-26 14:36:28.358454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.358468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:08.703 [2024-07-26 14:36:28.358501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:31:08.703 [2024-07-26 14:36:28.358512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.358567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.358584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:08.703 [2024-07-26 14:36:28.358595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:08.703 [2024-07-26 14:36:28.358605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.359093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:08.703 [2024-07-26 14:36:28.359122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:08.703 [2024-07-26 14:36:28.359141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:31:08.703 [2024-07-26 14:36:28.359152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.359253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.359297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:08.703 [2024-07-26 14:36:28.359311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:08.703 [2024-07-26 14:36:28.359321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.359356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.359371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:08.703 [2024-07-26 14:36:28.359383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:08.703 [2024-07-26 14:36:28.359393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.359425] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:08.703 [2024-07-26 14:36:28.363951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.364015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:08.703 [2024-07-26 14:36:28.364056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.533 ms 00:31:08.703 [2024-07-26 14:36:28.364068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.364112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.364129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:08.703 [2024-07-26 14:36:28.364141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:08.703 [2024-07-26 14:36:28.364152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.364220] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:08.703 [2024-07-26 14:36:28.364255] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:08.703 [2024-07-26 14:36:28.364300] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:08.703 [2024-07-26 14:36:28.364334] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:08.703 [2024-07-26 14:36:28.364447] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:08.703 [2024-07-26 14:36:28.364462] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:08.703 [2024-07-26 14:36:28.364476] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:08.703 [2024-07-26 14:36:28.364490] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:08.703 [2024-07-26 14:36:28.364502] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:08.703 [2024-07-26 14:36:28.364513] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:08.703 [2024-07-26 14:36:28.364523] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:08.703 [2024-07-26 14:36:28.364537] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:08.703 [2024-07-26 14:36:28.364547] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:08.703 [2024-07-26 14:36:28.364558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.364568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:08.703 [2024-07-26 14:36:28.364579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:31:08.703 [2024-07-26 14:36:28.364588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.364677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.703 [2024-07-26 14:36:28.364691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:08.703 [2024-07-26 14:36:28.364703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:08.703 [2024-07-26 14:36:28.364712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.703 [2024-07-26 14:36:28.364851] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:08.703 [2024-07-26 14:36:28.364883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:08.703 [2024-07-26 14:36:28.364913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:08.703 [2024-07-26 14:36:28.364927] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.703 [2024-07-26 14:36:28.364939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:08.703 [2024-07-26 14:36:28.364949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:08.703 [2024-07-26 14:36:28.364959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:08.703 [2024-07-26 14:36:28.364969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:08.703 [2024-07-26 14:36:28.364979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:08.704 [2024-07-26 14:36:28.364989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:08.704 [2024-07-26 14:36:28.365000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:08.704 [2024-07-26 14:36:28.365010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:08.704 [2024-07-26 14:36:28.365020] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:08.704 [2024-07-26 14:36:28.365030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:08.704 [2024-07-26 14:36:28.365040] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:08.704 [2024-07-26 14:36:28.365052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:08.704 [2024-07-26 14:36:28.365072] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365082] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:08.704 [2024-07-26 14:36:28.365102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365112] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:08.704 [2024-07-26 14:36:28.365147] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:08.704 [2024-07-26 14:36:28.365206] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365215] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365225] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:08.704 [2024-07-26 14:36:28.365234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:08.704 [2024-07-26 14:36:28.365262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:08.704 [2024-07-26 14:36:28.365282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:08.704 [2024-07-26 14:36:28.365292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:08.704 [2024-07-26 14:36:28.365301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:08.704 [2024-07-26 14:36:28.365311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:08.704 [2024-07-26 14:36:28.365320] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:08.704 [2024-07-26 14:36:28.365329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365339] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:08.704 [2024-07-26 14:36:28.365348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:08.704 [2024-07-26 14:36:28.365358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365366] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:08.704 [2024-07-26 14:36:28.365377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:08.704 [2024-07-26 14:36:28.365387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:08.704 [2024-07-26 14:36:28.365408] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:08.704 [2024-07-26 14:36:28.365417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:08.704 [2024-07-26 14:36:28.365427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:08.704 
[2024-07-26 14:36:28.365436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:08.704 [2024-07-26 14:36:28.365446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:08.704 [2024-07-26 14:36:28.365455] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:08.704 [2024-07-26 14:36:28.365466] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:08.704 [2024-07-26 14:36:28.365479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:08.704 [2024-07-26 14:36:28.365506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:08.704 [2024-07-26 14:36:28.365517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:08.704 [2024-07-26 14:36:28.365527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:08.704 [2024-07-26 14:36:28.365537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:08.704 [2024-07-26 14:36:28.365548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:08.704 [2024-07-26 14:36:28.365558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:08.704 [2024-07-26 14:36:28.365569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:08.704 [2024-07-26 14:36:28.365579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:08.704 [2024-07-26 14:36:28.365589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:08.704 [2024-07-26 14:36:28.365641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:08.704 [2024-07-26 14:36:28.365653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:08.704 [2024-07-26 14:36:28.365675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:08.704 [2024-07-26 14:36:28.365685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:08.704 [2024-07-26 14:36:28.365696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:08.704 [2024-07-26 14:36:28.365707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.704 [2024-07-26 14:36:28.365717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:08.704 [2024-07-26 14:36:28.365728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:31:08.704 [2024-07-26 14:36:28.365738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.704 [2024-07-26 14:36:28.406038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.704 [2024-07-26 14:36:28.406107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:08.704 [2024-07-26 14:36:28.406144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.213 ms 00:31:08.704 [2024-07-26 14:36:28.406155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.704 [2024-07-26 14:36:28.406265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.704 [2024-07-26 14:36:28.406281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:08.704 [2024-07-26 14:36:28.406292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:08.704 [2024-07-26 14:36:28.406302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.704 [2024-07-26 14:36:28.440517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.704 [2024-07-26 14:36:28.440589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:08.704 [2024-07-26 14:36:28.440606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.084 ms 00:31:08.705 [2024-07-26 14:36:28.440617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.705 [2024-07-26 14:36:28.440681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.705 [2024-07-26 14:36:28.440702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.705 [2024-07-26 14:36:28.440714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:08.705 [2024-07-26 14:36:28.440724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.705 [2024-07-26 14:36:28.440923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.705 [2024-07-26 14:36:28.440958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.705 [2024-07-26 14:36:28.440974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:31:08.705 [2024-07-26 14:36:28.440985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.705 [2024-07-26 14:36:28.441134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.705 [2024-07-26 14:36:28.441160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.705 [2024-07-26 14:36:28.441191] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:31:08.705 [2024-07-26 14:36:28.441202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.705 [2024-07-26 14:36:28.456833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.705 [2024-07-26 14:36:28.456918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.705 [2024-07-26 14:36:28.456942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.604 ms 00:31:08.705 [2024-07-26 14:36:28.456955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.705 [2024-07-26 14:36:28.457191] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:08.705 [2024-07-26 14:36:28.457230] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:08.705 [2024-07-26 14:36:28.457248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.705 [2024-07-26 14:36:28.457260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:08.705 [2024-07-26 14:36:28.457277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:31:08.705 [2024-07-26 14:36:28.457287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.471818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.471886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:08.964 [2024-07-26 14:36:28.471937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:31:08.964 [2024-07-26 14:36:28.471951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.472109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.472128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:08.964 [2024-07-26 14:36:28.472141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:31:08.964 [2024-07-26 14:36:28.472152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.472229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.472251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:08.964 [2024-07-26 14:36:28.472264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:08.964 [2024-07-26 14:36:28.472275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.473044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.473093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:08.964 [2024-07-26 14:36:28.473137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:31:08.964 [2024-07-26 14:36:28.473147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.473174] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:08.964 [2024-07-26 14:36:28.473191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.473207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:31:08.964 [2024-07-26 14:36:28.473229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:31:08.964 [2024-07-26 14:36:28.473238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.485255] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:08.964 [2024-07-26 14:36:28.485529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.485571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:08.964 [2024-07-26 14:36:28.485602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.267 ms 00:31:08.964 [2024-07-26 14:36:28.485613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.487662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.487708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:08.964 [2024-07-26 14:36:28.487741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:31:08.964 [2024-07-26 14:36:28.487752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.487852] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:08.964 [2024-07-26 14:36:28.488492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.488540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:08.964 [2024-07-26 14:36:28.488555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:31:08.964 [2024-07-26 14:36:28.488566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.488616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.488631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:08.964 [2024-07-26 14:36:28.488648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:08.964 [2024-07-26 14:36:28.488658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.488693] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:08.964 [2024-07-26 14:36:28.488710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.488720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:08.964 [2024-07-26 14:36:28.488730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:08.964 [2024-07-26 14:36:28.488740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.517384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.517447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:08.964 [2024-07-26 14:36:28.517479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.606 ms 00:31:08.964 [2024-07-26 14:36:28.517489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.517565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.964 [2024-07-26 14:36:28.517584] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:08.964 [2024-07-26 14:36:28.517595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:08.964 [2024-07-26 14:36:28.517605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.964 [2024-07-26 14:36:28.527117] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 167.642 ms, result 0 00:31:52.648  Copying: 24/1024 [MB] (24 MBps) Copying: 47/1024 [MB] (23 MBps) Copying: 70/1024 [MB] (23 MBps) Copying: 94/1024 [MB] (23 MBps) Copying: 119/1024 [MB] (24 MBps) Copying: 142/1024 [MB] (23 MBps) Copying: 166/1024 [MB] (23 MBps) Copying: 189/1024 [MB] (23 MBps) Copying: 212/1024 [MB] (23 MBps) Copying: 236/1024 [MB] (23 MBps) Copying: 259/1024 [MB] (23 MBps) Copying: 283/1024 [MB] (23 MBps) Copying: 306/1024 [MB] (23 MBps) Copying: 329/1024 [MB] (23 MBps) Copying: 353/1024 [MB] (23 MBps) Copying: 377/1024 [MB] (23 MBps) Copying: 401/1024 [MB] (23 MBps) Copying: 424/1024 [MB] (23 MBps) Copying: 448/1024 [MB] (23 MBps) Copying: 471/1024 [MB] (23 MBps) Copying: 495/1024 [MB] (23 MBps) Copying: 518/1024 [MB] (23 MBps) Copying: 542/1024 [MB] (23 MBps) Copying: 565/1024 [MB] (23 MBps) Copying: 589/1024 [MB] (23 MBps) Copying: 612/1024 [MB] (23 MBps) Copying: 636/1024 [MB] (23 MBps) Copying: 659/1024 [MB] (22 MBps) Copying: 682/1024 [MB] (23 MBps) Copying: 705/1024 [MB] (22 MBps) Copying: 728/1024 [MB] (23 MBps) Copying: 752/1024 [MB] (23 MBps) Copying: 775/1024 [MB] (23 MBps) Copying: 799/1024 [MB] (23 MBps) Copying: 823/1024 [MB] (23 MBps) Copying: 846/1024 [MB] (23 MBps) Copying: 870/1024 [MB] (23 MBps) Copying: 894/1024 [MB] (23 MBps) Copying: 918/1024 [MB] (24 MBps) Copying: 942/1024 [MB] (23 MBps) Copying: 965/1024 [MB] (23 MBps) Copying: 989/1024 [MB] (23 MBps) Copying: 1012/1024 [MB] (23 MBps) Copying: 1024/1024 [MB] (average 23 MBps)[2024-07-26 14:37:12.397192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.648 [2024-07-26 14:37:12.397311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:52.648 [2024-07-26 14:37:12.397342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:52.648 [2024-07-26 14:37:12.397361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.648 [2024-07-26 14:37:12.397409] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:52.648 [2024-07-26 14:37:12.402756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.649 [2024-07-26 14:37:12.402809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:52.649 [2024-07-26 14:37:12.402836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.311 ms 00:31:52.649 [2024-07-26 14:37:12.402852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.649 [2024-07-26 14:37:12.403232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.649 [2024-07-26 14:37:12.403288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:52.649 [2024-07-26 14:37:12.403309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:31:52.649 [2024-07-26 14:37:12.403326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.649 [2024-07-26 14:37:12.403376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.649 
[2024-07-26 14:37:12.403397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:52.649 [2024-07-26 14:37:12.403414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:52.649 [2024-07-26 14:37:12.403429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.649 [2024-07-26 14:37:12.403505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.649 [2024-07-26 14:37:12.403529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:52.649 [2024-07-26 14:37:12.403552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:52.649 [2024-07-26 14:37:12.403567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.649 [2024-07-26 14:37:12.403595] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:52.649 [2024-07-26 14:37:12.403619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:31:52.649 [2024-07-26 14:37:12.403640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403947] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.403989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 
14:37:12.404400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 
00:31:52.649 [2024-07-26 14:37:12.404835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:52.649 [2024-07-26 14:37:12.404936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.404952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.404969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.404985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 
wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:52.650 [2024-07-26 14:37:12.405396] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:52.650 [2024-07-26 14:37:12.405412] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b 00:31:52.650 [2024-07-26 14:37:12.405429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:31:52.650 [2024-07-26 14:37:12.405445] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:31:52.650 [2024-07-26 14:37:12.405460] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:31:52.650 [2024-07-26 14:37:12.405477] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:31:52.650 [2024-07-26 14:37:12.405492] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:52.650 [2024-07-26 14:37:12.405515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:52.650 [2024-07-26 14:37:12.405532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:52.650 [2024-07-26 14:37:12.405548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:52.650 [2024-07-26 14:37:12.405563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:52.650 [2024-07-26 14:37:12.405579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.650 [2024-07-26 14:37:12.405596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:52.650 [2024-07-26 14:37:12.405613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.985 ms 00:31:52.650 [2024-07-26 14:37:12.405628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.423436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.909 [2024-07-26 14:37:12.423488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:52.909 [2024-07-26 14:37:12.423519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.777 ms 00:31:52.909 [2024-07-26 14:37:12.423536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.424057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.909 [2024-07-26 14:37:12.424094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:52.909 [2024-07-26 14:37:12.424109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.494 ms 
00:31:52.909 [2024-07-26 14:37:12.424121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.457582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.457650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:52.909 [2024-07-26 14:37:12.457686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.457697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.457763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.457778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:52.909 [2024-07-26 14:37:12.457789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.457799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.457865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.457898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:52.909 [2024-07-26 14:37:12.457923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.457986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.458014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.458027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:52.909 [2024-07-26 14:37:12.458038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.458048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.545334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.545405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:52.909 [2024-07-26 14:37:12.545445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.545457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:52.909 [2024-07-26 14:37:12.622254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:52.909 [2024-07-26 14:37:12.622397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:52.909 [2024-07-26 14:37:12.622511] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:52.909 [2024-07-26 14:37:12.622693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:52.909 [2024-07-26 14:37:12.622777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:52.909 [2024-07-26 14:37:12.622857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.622921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:52.909 [2024-07-26 14:37:12.622938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:52.909 [2024-07-26 14:37:12.622949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:52.909 [2024-07-26 14:37:12.622960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.909 [2024-07-26 14:37:12.623138] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 225.931 ms, result 0 00:31:53.896 00:31:53.896 00:31:53.896 14:37:13 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:55.799 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:55.799 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:55.799 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:55.799 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:56.058 Process with pid 85319 is not found 00:31:56.058 Remove shared memory files 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 85319 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 85319 ']' 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 85319 00:31:56.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85319) - No such process 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 85319 is not found' 
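The md5sum -c check traced above is what closes out the restore test: a checksum of the test data is recorded before the FTL device goes through its fast shutdown, and re-checking it after the restore confirms the data survived the shutdown/restore cycle. A minimal sketch of that pattern, assuming the checksum was captured while the data was first written (file names here are placeholders, not the actual restore.sh variables):

    # checksum taken while the test data is first written to the FTL device
    md5sum "$testfile" > "$testfile.md5"
    # ... FTL fast shutdown, target restart, device restored from shared memory ...
    # verification after the restore; md5sum -c exits non-zero if the data changed
    md5sum -c "$testfile.md5"

The killprocess call in the same trace uses kill -0 only to probe whether the pid still exists, so a process that has already exited (pid 85319 here) is reported as not found rather than treated as a failure.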
00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_band_md /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_l2p_l1 /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_l2p_l2 /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_l2p_l2_ctx /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_nvc_md /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_p2l_pool /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_sb /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_sb_shm /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_trim_bitmap /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_trim_log /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_trim_md /dev/hugepages/ftl_6d53156d-d1e7-41e4-a7cc-dd2cf5c3ca8b_vmap 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:56.058 00:31:56.058 real 3m27.544s 00:31:56.058 user 3m14.420s 00:31:56.058 sys 0m14.323s 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:56.058 14:37:15 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:56.058 ************************************ 00:31:56.058 END TEST ftl_restore_fast 00:31:56.058 ************************************ 00:31:56.058 14:37:15 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:56.058 14:37:15 ftl -- ftl/ftl.sh@14 -- # killprocess 77360 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@950 -- # '[' -z 77360 ']' 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@954 -- # kill -0 77360 00:31:56.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (77360) - No such process 00:31:56.058 Process with pid 77360 is not found 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 77360 is not found' 00:31:56.058 14:37:15 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:56.058 14:37:15 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=87386 00:31:56.058 14:37:15 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:56.058 14:37:15 ftl -- ftl/ftl.sh@20 -- # waitforlisten 87386 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@831 -- # '[' -z 87386 ']' 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:56.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:56.058 14:37:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:56.317 [2024-07-26 14:37:15.858288] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:56.317 [2024-07-26 14:37:15.858439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87386 ] 00:31:56.317 [2024-07-26 14:37:16.022594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.576 [2024-07-26 14:37:16.242699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.144 14:37:16 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:57.144 14:37:16 ftl -- common/autotest_common.sh@864 -- # return 0 00:31:57.144 14:37:16 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:57.402 nvme0n1 00:31:57.660 14:37:17 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:57.660 14:37:17 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:57.660 14:37:17 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:57.660 14:37:17 ftl -- ftl/common.sh@28 -- # stores=c95932c0-67d7-4e25-9d17-5b6082c6043a 00:31:57.660 14:37:17 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:57.660 14:37:17 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c95932c0-67d7-4e25-9d17-5b6082c6043a 00:31:57.919 14:37:17 ftl -- ftl/ftl.sh@23 -- # killprocess 87386 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@950 -- # '[' -z 87386 ']' 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@954 -- # kill -0 87386 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@955 -- # uname 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87386 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:57.919 killing process with pid 87386 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87386' 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@969 -- # kill 87386 00:31:57.919 14:37:17 ftl -- common/autotest_common.sh@974 -- # wait 87386 00:31:59.821 14:37:19 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:00.079 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:00.079 Waiting for block devices as requested 00:32:00.079 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:00.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:00.338 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:00.338 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:05.652 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:05.652 14:37:25 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:05.652 Remove shared memory files 00:32:05.652 14:37:25 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:05.652 14:37:25 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:05.652 14:37:25 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:05.652 14:37:25 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:05.652 14:37:25 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:05.652 14:37:25 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:05.652 00:32:05.652 real 
15m26.797s 00:32:05.652 user 18m11.028s 00:32:05.652 sys 1m39.184s 00:32:05.652 14:37:25 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:05.652 14:37:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:05.652 ************************************ 00:32:05.652 END TEST ftl 00:32:05.652 ************************************ 00:32:05.652 14:37:25 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:05.652 14:37:25 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:05.652 14:37:25 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:05.652 14:37:25 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:05.652 14:37:25 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:05.652 14:37:25 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:05.652 14:37:25 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:05.652 14:37:25 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:05.652 14:37:25 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:05.652 14:37:25 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:05.652 14:37:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.652 14:37:25 -- common/autotest_common.sh@10 -- # set +x 00:32:05.652 14:37:25 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:05.652 14:37:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:05.652 14:37:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:05.652 14:37:25 -- common/autotest_common.sh@10 -- # set +x 00:32:07.031 INFO: APP EXITING 00:32:07.031 INFO: killing all VMs 00:32:07.031 INFO: killing vhost app 00:32:07.031 INFO: EXIT DONE 00:32:07.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:07.858 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:07.858 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:07.858 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:07.858 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:08.117 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:08.686 Cleaning 00:32:08.686 Removing: /var/run/dpdk/spdk0/config 00:32:08.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:08.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:08.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:08.686 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:08.686 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:08.686 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:08.686 Removing: /var/run/dpdk/spdk0 00:32:08.686 Removing: /var/run/dpdk/spdk_pid61774 00:32:08.686 Removing: /var/run/dpdk/spdk_pid61979 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62189 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62293 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62333 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62461 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62479 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62654 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62739 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62827 00:32:08.686 Removing: /var/run/dpdk/spdk_pid62941 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63030 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63070 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63106 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63174 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63280 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63714 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63785 00:32:08.686 
Removing: /var/run/dpdk/spdk_pid63854 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63870 00:32:08.686 Removing: /var/run/dpdk/spdk_pid63986 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64002 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64126 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64142 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64201 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64219 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64284 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64302 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64465 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64507 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64587 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64744 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64834 00:32:08.686 Removing: /var/run/dpdk/spdk_pid64876 00:32:08.686 Removing: /var/run/dpdk/spdk_pid65332 00:32:08.686 Removing: /var/run/dpdk/spdk_pid65430 00:32:08.686 Removing: /var/run/dpdk/spdk_pid65534 00:32:08.686 Removing: /var/run/dpdk/spdk_pid65587 00:32:08.686 Removing: /var/run/dpdk/spdk_pid65618 00:32:08.686 Removing: /var/run/dpdk/spdk_pid65694 00:32:08.686 Removing: /var/run/dpdk/spdk_pid66320 00:32:08.686 Removing: /var/run/dpdk/spdk_pid66363 00:32:08.686 Removing: /var/run/dpdk/spdk_pid66861 00:32:08.686 Removing: /var/run/dpdk/spdk_pid66960 00:32:08.686 Removing: /var/run/dpdk/spdk_pid67074 00:32:08.686 Removing: /var/run/dpdk/spdk_pid67133 00:32:08.686 Removing: /var/run/dpdk/spdk_pid67163 00:32:08.686 Removing: /var/run/dpdk/spdk_pid67189 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69052 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69189 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69194 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69212 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69255 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69260 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69272 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69317 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69321 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69333 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69378 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69382 00:32:08.686 Removing: /var/run/dpdk/spdk_pid69394 00:32:08.686 Removing: /var/run/dpdk/spdk_pid70750 00:32:08.686 Removing: /var/run/dpdk/spdk_pid70845 00:32:08.686 Removing: /var/run/dpdk/spdk_pid72245 00:32:08.686 Removing: /var/run/dpdk/spdk_pid73606 00:32:08.686 Removing: /var/run/dpdk/spdk_pid73720 00:32:08.686 Removing: /var/run/dpdk/spdk_pid73832 00:32:08.686 Removing: /var/run/dpdk/spdk_pid73941 00:32:08.686 Removing: /var/run/dpdk/spdk_pid74075 00:32:08.686 Removing: /var/run/dpdk/spdk_pid74150 00:32:08.686 Removing: /var/run/dpdk/spdk_pid74291 00:32:08.686 Removing: /var/run/dpdk/spdk_pid74656 00:32:08.686 Removing: /var/run/dpdk/spdk_pid74687 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75157 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75342 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75444 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75557 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75610 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75641 00:32:08.686 Removing: /var/run/dpdk/spdk_pid75924 00:32:08.946 Removing: /var/run/dpdk/spdk_pid75973 00:32:08.946 Removing: /var/run/dpdk/spdk_pid76047 00:32:08.946 Removing: /var/run/dpdk/spdk_pid76425 00:32:08.946 Removing: /var/run/dpdk/spdk_pid76572 00:32:08.946 Removing: /var/run/dpdk/spdk_pid77360 00:32:08.946 Removing: /var/run/dpdk/spdk_pid77491 00:32:08.946 Removing: /var/run/dpdk/spdk_pid77673 00:32:08.946 Removing: 
/var/run/dpdk/spdk_pid77776 00:32:08.946 Removing: /var/run/dpdk/spdk_pid78146 00:32:08.946 Removing: /var/run/dpdk/spdk_pid78410 00:32:08.946 Removing: /var/run/dpdk/spdk_pid78765 00:32:08.946 Removing: /var/run/dpdk/spdk_pid78958 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79098 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79152 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79307 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79332 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79396 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79597 00:32:08.946 Removing: /var/run/dpdk/spdk_pid79823 00:32:08.946 Removing: /var/run/dpdk/spdk_pid80281 00:32:08.946 Removing: /var/run/dpdk/spdk_pid80762 00:32:08.946 Removing: /var/run/dpdk/spdk_pid81231 00:32:08.946 Removing: /var/run/dpdk/spdk_pid81774 00:32:08.946 Removing: /var/run/dpdk/spdk_pid81911 00:32:08.946 Removing: /var/run/dpdk/spdk_pid82007 00:32:08.946 Removing: /var/run/dpdk/spdk_pid82719 00:32:08.946 Removing: /var/run/dpdk/spdk_pid82790 00:32:08.946 Removing: /var/run/dpdk/spdk_pid83278 00:32:08.946 Removing: /var/run/dpdk/spdk_pid83720 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84263 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84383 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84438 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84508 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84565 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84635 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84845 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84914 00:32:08.946 Removing: /var/run/dpdk/spdk_pid84982 00:32:08.946 Removing: /var/run/dpdk/spdk_pid85055 00:32:08.946 Removing: /var/run/dpdk/spdk_pid85090 00:32:08.946 Removing: /var/run/dpdk/spdk_pid85157 00:32:08.946 Removing: /var/run/dpdk/spdk_pid85319 00:32:08.946 Removing: /var/run/dpdk/spdk_pid85528 00:32:08.946 Removing: /var/run/dpdk/spdk_pid85973 00:32:08.946 Removing: /var/run/dpdk/spdk_pid86433 00:32:08.946 Removing: /var/run/dpdk/spdk_pid86878 00:32:08.946 Removing: /var/run/dpdk/spdk_pid87386 00:32:08.946 Clean 00:32:08.946 14:37:28 -- common/autotest_common.sh@1451 -- # return 0 00:32:08.946 14:37:28 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:32:08.946 14:37:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.946 14:37:28 -- common/autotest_common.sh@10 -- # set +x 00:32:08.946 14:37:28 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:32:08.946 14:37:28 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:08.946 14:37:28 -- common/autotest_common.sh@10 -- # set +x 00:32:09.205 14:37:28 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:09.205 14:37:28 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:09.205 14:37:28 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:09.205 14:37:28 -- spdk/autotest.sh@395 -- # hash lcov 00:32:09.205 14:37:28 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:09.205 14:37:28 -- spdk/autotest.sh@397 -- # hostname 00:32:09.205 14:37:28 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:09.205 geninfo: WARNING: invalid characters removed from testname! 
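Condensed, the coverage flow traced in this part of the log is: capture the counters gathered during the test run from the build tree, merge them with the pre-test baseline, then strip external and uninteresting sources. A sketch with the long option lists and absolute paths trimmed (the real run also keeps branch coverage enabled via the --rc lcov_branch_coverage=1 options shown above):

    # capture counters gathered during the test run
    lcov -q -c --no-external -d "$spdk_dir" -t "$(hostname)" -o cov_test.info
    # merge with the pre-test baseline
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # drop sources that should not count against SPDK coverage
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*'   -o cov_total.info

The trace lines that follow repeat the same -r filter for the remaining exclude patterns (examples/vmd, spdk_lspci, spdk_top) before the intermediate .info files are removed.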
00:32:35.761 14:37:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:35.761 14:37:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:38.294 14:37:57 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:40.825 14:38:00 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:43.357 14:38:02 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:45.914 14:38:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:48.444 14:38:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:48.444 14:38:08 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:48.444 14:38:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:48.444 14:38:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.444 14:38:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.444 14:38:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.444 14:38:08 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.444 14:38:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.444 14:38:08 -- paths/export.sh@5 -- $ export PATH 00:32:48.444 14:38:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.444 14:38:08 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:48.444 14:38:08 -- common/autobuild_common.sh@447 -- $ date +%s 00:32:48.444 14:38:08 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1722004688.XXXXXX 00:32:48.444 14:38:08 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1722004688.b0ZN16 00:32:48.444 14:38:08 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:32:48.444 14:38:08 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:32:48.444 14:38:08 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:32:48.444 14:38:08 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:48.444 14:38:08 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:48.444 14:38:08 -- common/autobuild_common.sh@463 -- $ get_config_params 00:32:48.444 14:38:08 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:32:48.444 14:38:08 -- common/autotest_common.sh@10 -- $ set +x 00:32:48.444 14:38:08 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:32:48.444 14:38:08 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:32:48.444 14:38:08 -- pm/common@17 -- $ local monitor 00:32:48.444 14:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:48.444 14:38:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:48.444 14:38:08 -- pm/common@25 -- $ sleep 1 00:32:48.444 14:38:08 -- pm/common@21 -- $ date +%s 00:32:48.444 14:38:08 -- pm/common@21 -- $ date +%s 00:32:48.444 14:38:08 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1722004688 00:32:48.444 14:38:08 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1722004688 00:32:48.444 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1722004688_collect-vmstat.pm.log 00:32:48.444 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1722004688_collect-cpu-load.pm.log 00:32:49.380 14:38:09 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:32:49.380 14:38:09 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:49.380 14:38:09 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:49.380 14:38:09 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:49.380 14:38:09 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:49.380 14:38:09 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:49.380 14:38:09 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:49.380 14:38:09 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:49.380 14:38:09 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:49.380 14:38:09 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:49.638 14:38:09 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:49.639 14:38:09 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:49.639 14:38:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:49.639 14:38:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:49.639 14:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.639 14:38:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:49.639 14:38:09 -- pm/common@44 -- $ pid=89059 00:32:49.639 14:38:09 -- pm/common@50 -- $ kill -TERM 89059 00:32:49.639 14:38:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:49.639 14:38:09 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:49.639 14:38:09 -- pm/common@44 -- $ pid=89061 00:32:49.639 14:38:09 -- pm/common@50 -- $ kill -TERM 89061 00:32:49.639 + [[ -n 5204 ]] 00:32:49.639 + sudo kill 5204 00:32:49.648 [Pipeline] } 00:32:49.668 [Pipeline] // timeout 00:32:49.674 [Pipeline] } 00:32:49.691 [Pipeline] // stage 00:32:49.697 [Pipeline] } 00:32:49.715 [Pipeline] // catchError 00:32:49.725 [Pipeline] stage 00:32:49.728 [Pipeline] { (Stop VM) 00:32:49.744 [Pipeline] sh 00:32:50.027 + vagrant halt 00:32:53.339 ==> default: Halting domain... 00:32:59.913 [Pipeline] sh 00:33:00.192 + vagrant destroy -f 00:33:03.479 ==> default: Removing domain... 
00:33:03.501 [Pipeline] sh 00:33:03.783 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:33:03.791 [Pipeline] } 00:33:03.807 [Pipeline] // stage 00:33:03.811 [Pipeline] } 00:33:03.824 [Pipeline] // dir 00:33:03.829 [Pipeline] } 00:33:03.842 [Pipeline] // wrap 00:33:03.848 [Pipeline] } 00:33:03.859 [Pipeline] // catchError 00:33:03.867 [Pipeline] stage 00:33:03.868 [Pipeline] { (Epilogue) 00:33:03.879 [Pipeline] sh 00:33:04.156 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:09.436 [Pipeline] catchError 00:33:09.438 [Pipeline] { 00:33:09.455 [Pipeline] sh 00:33:09.752 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:10.011 Artifacts sizes are good 00:33:10.021 [Pipeline] } 00:33:10.040 [Pipeline] // catchError 00:33:10.064 [Pipeline] archiveArtifacts 00:33:10.072 Archiving artifacts 00:33:10.215 [Pipeline] cleanWs 00:33:10.228 [WS-CLEANUP] Deleting project workspace... 00:33:10.228 [WS-CLEANUP] Deferred wipeout is used... 00:33:10.235 [WS-CLEANUP] done 00:33:10.237 [Pipeline] } 00:33:10.257 [Pipeline] // stage 00:33:10.264 [Pipeline] } 00:33:10.281 [Pipeline] // node 00:33:10.288 [Pipeline] End of Pipeline 00:33:10.341 Finished: SUCCESS